00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 627 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3292 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.120 The recommended git tool is: git 00:00:00.120 using credential 00000000-0000-0000-0000-000000000002 00:00:00.128 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.161 Fetching changes from the remote Git repository 00:00:00.162 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.194 Using shallow fetch with depth 1 00:00:00.194 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.194 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.261 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.261 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.297 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.310 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.321 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:04.321 > git config core.sparsecheckout # timeout=10 00:00:04.332 > git read-tree -mu HEAD # timeout=10 00:00:04.350 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:04.371 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:04.371 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:04.460 [Pipeline] Start of Pipeline 00:00:04.471 [Pipeline] library 00:00:04.473 Loading library shm_lib@master 00:00:04.473 Library shm_lib@master is cached. Copying from home. 00:00:04.487 [Pipeline] node 00:00:04.493 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:00:04.495 [Pipeline] { 00:00:04.503 [Pipeline] catchError 00:00:04.504 [Pipeline] { 00:00:04.514 [Pipeline] wrap 00:00:04.520 [Pipeline] { 00:00:04.526 [Pipeline] stage 00:00:04.527 [Pipeline] { (Prologue) 00:00:04.542 [Pipeline] echo 00:00:04.543 Node: VM-host-SM0 00:00:04.547 [Pipeline] cleanWs 00:00:04.553 [WS-CLEANUP] Deleting project workspace... 00:00:04.553 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.559 [WS-CLEANUP] done 00:00:04.715 [Pipeline] setCustomBuildProperty 00:00:04.776 [Pipeline] httpRequest 00:00:04.805 [Pipeline] echo 00:00:04.806 Sorcerer 10.211.164.101 is alive 00:00:04.814 [Pipeline] httpRequest 00:00:04.817 HttpMethod: GET 00:00:04.818 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.818 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:04.831 Response Code: HTTP/1.1 200 OK 00:00:04.831 Success: Status code 200 is in the accepted range: 200,404 00:00:04.831 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.176 [Pipeline] sh 00:00:06.471 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:06.486 [Pipeline] httpRequest 00:00:06.511 [Pipeline] echo 00:00:06.513 Sorcerer 10.211.164.101 is alive 00:00:06.518 [Pipeline] httpRequest 00:00:06.521 HttpMethod: GET 00:00:06.522 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:06.522 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:06.531 Response Code: HTTP/1.1 200 OK 00:00:06.532 Success: Status code 200 is in the accepted range: 200,404 00:00:06.532 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:52.735 [Pipeline] sh 00:00:53.015 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:56.331 [Pipeline] sh 00:00:56.608 + git -C spdk log --oneline -n5 00:00:56.609 dbef7efac test: fix dpdk builds on ubuntu24 00:00:56.609 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:56.609 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:56.609 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:56.609 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:56.630 [Pipeline] withCredentials 00:00:56.641 > git --version # timeout=10 00:00:56.655 > git --version # 'git version 2.39.2' 00:00:56.670 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:56.672 [Pipeline] { 00:00:56.683 [Pipeline] retry 00:00:56.685 [Pipeline] { 00:00:56.707 [Pipeline] sh 00:00:56.987 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:00:56.997 [Pipeline] } 00:00:57.017 [Pipeline] // retry 00:00:57.021 [Pipeline] } 00:00:57.038 [Pipeline] // withCredentials 00:00:57.046 [Pipeline] httpRequest 00:00:57.062 [Pipeline] echo 00:00:57.064 Sorcerer 10.211.164.101 is alive 00:00:57.071 [Pipeline] httpRequest 00:00:57.074 HttpMethod: GET 00:00:57.075 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.075 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:00:57.076 Response Code: HTTP/1.1 200 OK 00:00:57.077 Success: Status code 200 is in the accepted range: 200,404 00:00:57.077 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest_2/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:03.811 [Pipeline] sh 00:01:04.089 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:06.002 [Pipeline] sh 00:01:06.284 + git -C dpdk log --oneline -n5 00:01:06.284 caf0f5d395 version: 22.11.4 00:01:06.284 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:06.284 dc9c799c7d vhost: fix missing spinlock unlock 00:01:06.284 
4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:06.284 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:06.302 [Pipeline] writeFile 00:01:06.320 [Pipeline] sh 00:01:06.601 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:06.612 [Pipeline] sh 00:01:06.892 + cat autorun-spdk.conf 00:01:06.892 SPDK_TEST_UNITTEST=1 00:01:06.892 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.892 SPDK_TEST_NVME=1 00:01:06.892 SPDK_TEST_BLOCKDEV=1 00:01:06.892 SPDK_RUN_ASAN=1 00:01:06.892 SPDK_RUN_UBSAN=1 00:01:06.892 SPDK_TEST_RAID5=1 00:01:06.892 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:06.892 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:06.892 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:06.902 RUN_NIGHTLY=1 00:01:06.918 [Pipeline] } 00:01:06.944 [Pipeline] // stage 00:01:06.954 [Pipeline] stage 00:01:06.956 [Pipeline] { (Run VM) 00:01:06.963 [Pipeline] sh 00:01:07.236 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:07.236 + echo 'Start stage prepare_nvme.sh' 00:01:07.236 Start stage prepare_nvme.sh 00:01:07.236 + [[ -n 2 ]] 00:01:07.236 + disk_prefix=ex2 00:01:07.236 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest_2 ]] 00:01:07.236 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf ]] 00:01:07.236 + source /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf 00:01:07.236 ++ SPDK_TEST_UNITTEST=1 00:01:07.236 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.236 ++ SPDK_TEST_NVME=1 00:01:07.236 ++ SPDK_TEST_BLOCKDEV=1 00:01:07.236 ++ SPDK_RUN_ASAN=1 00:01:07.236 ++ SPDK_RUN_UBSAN=1 00:01:07.236 ++ SPDK_TEST_RAID5=1 00:01:07.236 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:07.236 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:07.236 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:07.236 ++ RUN_NIGHTLY=1 00:01:07.236 + cd /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:01:07.236 + nvme_files=() 00:01:07.236 + declare -A nvme_files 00:01:07.236 + backend_dir=/var/lib/libvirt/images/backends 00:01:07.236 + nvme_files['nvme.img']=5G 00:01:07.236 + nvme_files['nvme-cmb.img']=5G 00:01:07.236 + nvme_files['nvme-multi0.img']=4G 00:01:07.236 + nvme_files['nvme-multi1.img']=4G 00:01:07.236 + nvme_files['nvme-multi2.img']=4G 00:01:07.236 + nvme_files['nvme-openstack.img']=8G 00:01:07.236 + nvme_files['nvme-zns.img']=5G 00:01:07.236 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:07.236 + (( SPDK_TEST_FTL == 1 )) 00:01:07.236 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:07.236 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:07.236 + for nvme in "${!nvme_files[@]}" 00:01:07.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:07.236 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.236 + for nvme in "${!nvme_files[@]}" 00:01:07.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:07.236 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:07.236 + for nvme in "${!nvme_files[@]}" 00:01:07.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:07.236 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:07.236 + for nvme in "${!nvme_files[@]}" 00:01:07.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:07.236 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:07.236 + for nvme in "${!nvme_files[@]}" 00:01:07.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:07.236 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.236 + for nvme in "${!nvme_files[@]}" 00:01:07.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:07.236 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:07.236 + for nvme in "${!nvme_files[@]}" 00:01:07.236 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:08.171 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:08.171 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:08.171 + echo 'End stage prepare_nvme.sh' 00:01:08.171 End stage prepare_nvme.sh 00:01:08.183 [Pipeline] sh 00:01:08.463 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:08.463 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -H -a -v -f ubuntu2204 00:01:08.463 00:01:08.463 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant 00:01:08.463 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk 00:01:08.463 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest_2 00:01:08.463 HELP=0 00:01:08.464 DRY_RUN=0 00:01:08.464 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img, 00:01:08.464 NVME_DISKS_TYPE=nvme, 00:01:08.464 NVME_AUTO_CREATE=0 00:01:08.464 NVME_DISKS_NAMESPACES=, 00:01:08.464 NVME_CMB=, 00:01:08.464 NVME_PMR=, 00:01:08.464 NVME_ZNS=, 00:01:08.464 NVME_MS=, 00:01:08.464 NVME_FDP=, 00:01:08.464 SPDK_VAGRANT_DISTRO=ubuntu2204 00:01:08.464 SPDK_VAGRANT_VMCPU=10 00:01:08.464 SPDK_VAGRANT_VMRAM=12288 00:01:08.464 SPDK_VAGRANT_PROVIDER=libvirt 00:01:08.464 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:08.464 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:08.464 SPDK_OPENSTACK_NETWORK=0 
00:01:08.464 VAGRANT_PACKAGE_BOX=0 00:01:08.464 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:08.464 FORCE_DISTRO=true 00:01:08.464 VAGRANT_BOX_VERSION= 00:01:08.464 EXTRA_VAGRANTFILES= 00:01:08.464 NIC_MODEL=e1000 00:01:08.464 00:01:08.464 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt' 00:01:08.464 /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest_2 00:01:11.770 Bringing machine 'default' up with 'libvirt' provider... 00:01:12.337 ==> default: Creating image (snapshot of base box volume). 00:01:12.595 ==> default: Creating domain with the following settings... 00:01:12.596 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1721816798_472c4ec61bbbb456eadd 00:01:12.596 ==> default: -- Domain type: kvm 00:01:12.596 ==> default: -- Cpus: 10 00:01:12.596 ==> default: -- Feature: acpi 00:01:12.596 ==> default: -- Feature: apic 00:01:12.596 ==> default: -- Feature: pae 00:01:12.596 ==> default: -- Memory: 12288M 00:01:12.596 ==> default: -- Memory Backing: hugepages: 00:01:12.596 ==> default: -- Management MAC: 00:01:12.596 ==> default: -- Loader: 00:01:12.596 ==> default: -- Nvram: 00:01:12.596 ==> default: -- Base box: spdk/ubuntu2204 00:01:12.596 ==> default: -- Storage pool: default 00:01:12.596 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1721816798_472c4ec61bbbb456eadd.img (20G) 00:01:12.596 ==> default: -- Volume Cache: default 00:01:12.596 ==> default: -- Kernel: 00:01:12.596 ==> default: -- Initrd: 00:01:12.596 ==> default: -- Graphics Type: vnc 00:01:12.596 ==> default: -- Graphics Port: -1 00:01:12.596 ==> default: -- Graphics IP: 127.0.0.1 00:01:12.596 ==> default: -- Graphics Password: Not defined 00:01:12.596 ==> default: -- Video Type: cirrus 00:01:12.596 ==> default: -- Video VRAM: 9216 00:01:12.596 ==> default: -- Sound Type: 00:01:12.596 ==> default: -- Keymap: en-us 00:01:12.596 ==> default: -- TPM Path: 00:01:12.596 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:12.596 ==> default: -- Command line args: 00:01:12.596 ==> default: -> value=-device, 00:01:12.596 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:12.596 ==> default: -> value=-drive, 00:01:12.596 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:01:12.596 ==> default: -> value=-device, 00:01:12.596 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:12.853 ==> default: Creating shared folders metadata... 00:01:12.853 ==> default: Starting domain. 00:01:14.229 ==> default: Waiting for domain to get an IP address... 00:01:26.490 ==> default: Waiting for SSH to become available... 00:01:29.030 ==> default: Configuring and enabling network interfaces... 00:01:33.213 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:38.479 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:42.666 ==> default: Mounting SSHFS shared folder... 00:01:43.600 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:01:43.600 ==> default: Checking Mount.. 00:01:44.167 ==> default: Folder Successfully Mounted! 
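The -device/-drive values printed in the domain definition above are how vagrant-libvirt hands the raw backing file to the guest as an emulated NVMe controller with a single namespace. A rough standalone sketch of the same storage wiring, assuming qemu-system-x86_64 is on PATH and reusing the image path, serial number and block sizes from the log (the machine, CPU and memory flags below are illustrative, and the OS disk, network and display options are omitted):

  # Illustrative only: attach ex2-nvme.img as NVMe controller nvme-0 with namespace 1,
  # mirroring the command line args shown in the domain definition above.
  qemu-system-x86_64 \
      -machine q35,accel=kvm -smp 10 -m 12288 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096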
00:01:44.167 ==> default: Running provisioner: file... 00:01:44.734 default: ~/.gitconfig => .gitconfig 00:01:44.992 00:01:44.992 SUCCESS! 00:01:44.992 00:01:44.992 cd to /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:01:44.992 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:44.992 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt" to destroy all trace of vm. 00:01:44.992 00:01:45.002 [Pipeline] } 00:01:45.022 [Pipeline] // stage 00:01:45.034 [Pipeline] dir 00:01:45.034 Running in /var/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt 00:01:45.037 [Pipeline] { 00:01:45.053 [Pipeline] catchError 00:01:45.056 [Pipeline] { 00:01:45.074 [Pipeline] sh 00:01:45.354 + vagrant ssh-config --host vagrant 00:01:45.354 + sed -ne /^Host/,$p 00:01:45.354 + tee ssh_conf 00:01:49.547 Host vagrant 00:01:49.547 HostName 192.168.121.29 00:01:49.547 User vagrant 00:01:49.547 Port 22 00:01:49.547 UserKnownHostsFile /dev/null 00:01:49.547 StrictHostKeyChecking no 00:01:49.547 PasswordAuthentication no 00:01:49.547 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:01:49.547 IdentitiesOnly yes 00:01:49.547 LogLevel FATAL 00:01:49.547 ForwardAgent yes 00:01:49.547 ForwardX11 yes 00:01:49.547 00:01:49.560 [Pipeline] withEnv 00:01:49.562 [Pipeline] { 00:01:49.579 [Pipeline] sh 00:01:49.891 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:49.891 source /etc/os-release 00:01:49.891 [[ -e /image.version ]] && img=$(< /image.version) 00:01:49.891 # Minimal, systemd-like check. 00:01:49.891 if [[ -e /.dockerenv ]]; then 00:01:49.891 # Clear garbage from the node's name: 00:01:49.891 # agt-er_autotest_547-896 -> autotest_547-896 00:01:49.891 # $HOSTNAME is the actual container id 00:01:49.891 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:49.891 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:49.891 # We can assume this is a mount from a host where container is running, 00:01:49.891 # so fetch its hostname to easily identify the target swarm worker. 
00:01:49.891 container="$(< /etc/hostname) ($agent)" 00:01:49.891 else 00:01:49.891 # Fallback 00:01:49.891 container=$agent 00:01:49.891 fi 00:01:49.891 fi 00:01:49.891 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:49.891 00:01:50.160 [Pipeline] } 00:01:50.180 [Pipeline] // withEnv 00:01:50.190 [Pipeline] setCustomBuildProperty 00:01:50.209 [Pipeline] stage 00:01:50.210 [Pipeline] { (Tests) 00:01:50.227 [Pipeline] sh 00:01:50.508 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:50.783 [Pipeline] sh 00:01:51.066 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:51.339 [Pipeline] timeout 00:01:51.339 Timeout set to expire in 1 hr 30 min 00:01:51.341 [Pipeline] { 00:01:51.359 [Pipeline] sh 00:01:51.636 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:52.202 HEAD is now at dbef7efac test: fix dpdk builds on ubuntu24 00:01:52.215 [Pipeline] sh 00:01:52.494 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:52.765 [Pipeline] sh 00:01:53.043 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:53.317 [Pipeline] sh 00:01:53.595 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:01:53.854 ++ readlink -f spdk_repo 00:01:53.854 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:53.854 + [[ -n /home/vagrant/spdk_repo ]] 00:01:53.854 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:53.854 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:53.854 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:53.854 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:53.854 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:53.854 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:01:53.854 + cd /home/vagrant/spdk_repo 00:01:53.854 + source /etc/os-release 00:01:53.854 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:01:53.854 ++ NAME=Ubuntu 00:01:53.854 ++ VERSION_ID=22.04 00:01:53.854 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:01:53.854 ++ VERSION_CODENAME=jammy 00:01:53.854 ++ ID=ubuntu 00:01:53.854 ++ ID_LIKE=debian 00:01:53.854 ++ HOME_URL=https://www.ubuntu.com/ 00:01:53.854 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:53.854 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:53.854 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:53.854 ++ UBUNTU_CODENAME=jammy 00:01:53.854 + uname -a 00:01:53.854 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:53.854 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:53.854 Hugepages 00:01:53.854 node hugesize free / total 00:01:53.854 node0 1048576kB 0 / 0 00:01:53.854 node0 2048kB 0 / 0 00:01:53.854 00:01:53.854 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.854 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:54.112 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:54.112 + rm -f /tmp/spdk-ld-path 00:01:54.112 + source autorun-spdk.conf 00:01:54.112 ++ SPDK_TEST_UNITTEST=1 00:01:54.112 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.112 ++ SPDK_TEST_NVME=1 00:01:54.112 ++ SPDK_TEST_BLOCKDEV=1 00:01:54.112 ++ SPDK_RUN_ASAN=1 00:01:54.112 ++ SPDK_RUN_UBSAN=1 00:01:54.112 ++ SPDK_TEST_RAID5=1 00:01:54.112 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:54.112 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:54.112 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.112 ++ RUN_NIGHTLY=1 00:01:54.112 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:54.112 + [[ -n '' ]] 00:01:54.112 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:54.112 + for M in /var/spdk/build-*-manifest.txt 00:01:54.112 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:54.112 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:54.112 + for M in /var/spdk/build-*-manifest.txt 00:01:54.112 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:54.112 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:54.112 ++ uname 00:01:54.112 + [[ Linux == \L\i\n\u\x ]] 00:01:54.112 + sudo dmesg -T 00:01:54.112 + sudo dmesg --clear 00:01:54.112 + dmesg_pid=2272 00:01:54.112 + [[ Ubuntu == FreeBSD ]] 00:01:54.112 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:54.112 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:54.112 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:54.112 + sudo dmesg -Tw 00:01:54.112 + [[ -x /usr/src/fio-static/fio ]] 00:01:54.112 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:54.112 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:54.112 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:54.112 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:54.112 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:54.112 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:54.112 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:54.112 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:54.112 Test configuration: 00:01:54.112 SPDK_TEST_UNITTEST=1 00:01:54.112 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.112 SPDK_TEST_NVME=1 00:01:54.112 SPDK_TEST_BLOCKDEV=1 00:01:54.112 SPDK_RUN_ASAN=1 00:01:54.112 SPDK_RUN_UBSAN=1 00:01:54.112 SPDK_TEST_RAID5=1 00:01:54.112 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:54.112 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:54.112 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.113 RUN_NIGHTLY=1 10:27:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:54.113 10:27:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:54.113 10:27:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:54.113 10:27:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:54.113 10:27:20 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.113 10:27:20 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.113 10:27:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.113 10:27:20 -- paths/export.sh@5 -- $ export PATH 00:01:54.113 10:27:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:54.113 10:27:20 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:54.113 10:27:20 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:54.113 10:27:20 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721816840.XXXXXX 00:01:54.113 10:27:20 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721816840.RjTXHY 00:01:54.113 10:27:20 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:54.113 10:27:20 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:01:54.113 10:27:20 -- common/autobuild_common.sh@445 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:54.113 10:27:20 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:54.113 
10:27:20 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:54.113 10:27:20 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:54.113 10:27:20 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:54.113 10:27:20 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:54.113 10:27:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.113 10:27:20 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:54.113 10:27:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:54.113 10:27:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:54.113 10:27:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:54.113 10:27:20 -- spdk/autobuild.sh@16 -- $ date -u 00:01:54.113 Wed Jul 24 10:27:20 UTC 2024 00:01:54.113 10:27:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:54.113 LTS-60-gdbef7efac 00:01:54.113 10:27:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:54.113 10:27:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:54.113 10:27:20 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:54.113 10:27:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:54.113 10:27:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.113 ************************************ 00:01:54.113 START TEST asan 00:01:54.113 ************************************ 00:01:54.113 using asan 00:01:54.113 10:27:20 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:54.113 00:01:54.113 real 0m0.000s 00:01:54.113 user 0m0.000s 00:01:54.113 sys 0m0.000s 00:01:54.113 10:27:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.113 ************************************ 00:01:54.113 END TEST asan 00:01:54.113 10:27:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.113 ************************************ 00:01:54.371 10:27:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:54.371 10:27:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:54.371 10:27:20 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:54.371 10:27:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:54.371 10:27:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.371 ************************************ 00:01:54.371 START TEST ubsan 00:01:54.371 ************************************ 00:01:54.371 10:27:20 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:54.371 using ubsan 00:01:54.371 00:01:54.371 real 0m0.000s 00:01:54.371 user 0m0.000s 00:01:54.371 sys 0m0.000s 00:01:54.371 10:27:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:54.371 10:27:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.371 ************************************ 00:01:54.371 END TEST ubsan 00:01:54.371 ************************************ 00:01:54.371 10:27:20 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:54.372 10:27:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:54.372 10:27:20 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:54.372 10:27:20 -- common/autotest_common.sh@1077 -- $ '[' 
2 -le 1 ']' 00:01:54.372 10:27:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:54.372 10:27:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:54.372 ************************************ 00:01:54.372 START TEST build_native_dpdk 00:01:54.372 ************************************ 00:01:54.372 10:27:20 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:54.372 10:27:20 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:54.372 10:27:20 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:54.372 10:27:20 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:54.372 10:27:20 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:54.372 10:27:20 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:54.372 10:27:20 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:54.372 10:27:20 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:54.372 10:27:20 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:54.372 10:27:20 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:54.372 10:27:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:54.372 10:27:20 -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:01:54.372 10:27:20 -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:01:54.372 10:27:20 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:54.372 10:27:20 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:54.372 10:27:20 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:54.372 10:27:20 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:54.372 10:27:20 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:54.372 caf0f5d395 version: 22.11.4 00:01:54.372 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:54.372 dc9c799c7d vhost: fix missing spinlock unlock 00:01:54.372 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:54.372 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:54.372 10:27:20 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:54.372 10:27:20 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:54.372 10:27:20 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:54.372 10:27:20 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:54.372 10:27:20 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:54.372 10:27:20 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:54.372 10:27:20 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:54.372 10:27:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:54.372 10:27:20 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:54.372 10:27:20 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:54.372 10:27:20 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:54.372 10:27:20 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:54.372 10:27:20 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:54.372 10:27:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:54.372 10:27:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:54.372 10:27:20 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:54.372 10:27:20 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:54.372 10:27:20 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.372 10:27:20 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:54.372 10:27:20 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:54.372 10:27:20 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:54.372 10:27:20 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:54.372 10:27:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:54.372 10:27:20 -- scripts/common.sh@343 -- $ case "$op" in 00:01:54.372 10:27:20 -- scripts/common.sh@344 -- $ : 1 00:01:54.372 10:27:20 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:54.372 10:27:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:54.372 10:27:20 -- scripts/common.sh@364 -- $ decimal 22 00:01:54.372 10:27:20 -- scripts/common.sh@352 -- $ local d=22 00:01:54.372 10:27:20 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.372 10:27:20 -- scripts/common.sh@354 -- $ echo 22 00:01:54.372 10:27:20 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:54.372 10:27:20 -- scripts/common.sh@365 -- $ decimal 21 00:01:54.372 10:27:20 -- scripts/common.sh@352 -- $ local d=21 00:01:54.372 10:27:20 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:54.372 10:27:20 -- scripts/common.sh@354 -- $ echo 21 00:01:54.372 10:27:20 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:54.372 10:27:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:54.372 10:27:20 -- scripts/common.sh@366 -- $ return 1 00:01:54.372 10:27:20 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:54.372 patching file config/rte_config.h 00:01:54.372 Hunk #1 succeeded at 60 (offset 1 line). 00:01:54.372 10:27:20 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:54.372 10:27:20 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:54.372 10:27:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:54.372 10:27:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:54.372 10:27:20 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:54.372 10:27:20 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:54.372 10:27:20 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.372 10:27:20 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:54.372 10:27:20 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:54.372 10:27:20 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:54.372 10:27:20 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:54.372 10:27:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:54.372 10:27:20 -- scripts/common.sh@343 -- $ case "$op" in 00:01:54.372 10:27:20 -- scripts/common.sh@344 -- $ : 1 00:01:54.372 10:27:20 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:54.372 10:27:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:54.372 10:27:20 -- scripts/common.sh@364 -- $ decimal 22 00:01:54.372 10:27:20 -- scripts/common.sh@352 -- $ local d=22 00:01:54.372 10:27:20 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.372 10:27:20 -- scripts/common.sh@354 -- $ echo 22 00:01:54.372 10:27:20 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:54.372 10:27:20 -- scripts/common.sh@365 -- $ decimal 24 00:01:54.372 10:27:20 -- scripts/common.sh@352 -- $ local d=24 00:01:54.372 10:27:20 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:54.372 10:27:20 -- scripts/common.sh@354 -- $ echo 24 00:01:54.372 10:27:20 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:54.372 10:27:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:54.372 10:27:20 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:54.372 10:27:20 -- scripts/common.sh@367 -- $ return 0 00:01:54.372 10:27:20 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:54.372 patching file lib/pcapng/rte_pcapng.c 00:01:54.372 Hunk #1 succeeded at 110 (offset -18 lines). 
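The xtrace above is the build script's dotted-version check deciding which DPDK patches to apply: 22.11.4 compares as not older than 21.11.0 and as older than 24.07.0, which selects the rte_config.h and rte_pcapng.c patches applied above. A simplified standalone sketch of that kind of field-by-field comparison (not the actual cmp_versions helper from scripts/common.sh):

  # Sketch: return 0 if dotted version $1 is strictly older than $2, else 1.
  version_lt() {
      local -a v1 v2
      IFS=. read -ra v1 <<< "$1"
      IFS=. read -ra v2 <<< "$2"
      local i a b
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          a=${v1[i]:-0} b=${v2[i]:-0}
          (( 10#$a < 10#$b )) && return 0
          (( 10#$a > 10#$b )) && return 1
      done
      return 1   # equal counts as "not older"
  }
  version_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"
  version_lt 22.11.4 24.07.0 && echo "22.11.4 is older than 24.07.0"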
00:01:54.372 10:27:20 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:54.372 10:27:20 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:54.372 10:27:20 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:54.372 10:27:20 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:54.372 10:27:20 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:59.635 The Meson build system 00:01:59.635 Version: 1.4.0 00:01:59.635 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:59.635 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:59.635 Build type: native build 00:01:59.635 Program cat found: YES (/usr/bin/cat) 00:01:59.635 Project name: DPDK 00:01:59.635 Project version: 22.11.4 00:01:59.635 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:01:59.635 C linker for the host machine: gcc ld.bfd 2.38 00:01:59.635 Host machine cpu family: x86_64 00:01:59.635 Host machine cpu: x86_64 00:01:59.635 Message: ## Building in Developer Mode ## 00:01:59.635 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.635 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:59.635 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.635 Program objdump found: YES (/usr/bin/objdump) 00:01:59.635 Program python3 found: YES (/usr/bin/python3) 00:01:59.635 Program cat found: YES (/usr/bin/cat) 00:01:59.635 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:59.635 Checking for size of "void *" : 8 00:01:59.635 Checking for size of "void *" : 8 (cached) 00:01:59.635 Library m found: YES 00:01:59.635 Library numa found: YES 00:01:59.635 Has header "numaif.h" : YES 00:01:59.635 Library fdt found: NO 00:01:59.635 Library execinfo found: NO 00:01:59.635 Has header "execinfo.h" : YES 00:01:59.635 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:01:59.635 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.635 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.635 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.635 Run-time dependency openssl found: YES 3.0.2 00:01:59.635 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:59.635 Library pcap found: NO 00:01:59.635 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.635 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.635 Compiler for C supports arguments -Wformat: YES 00:01:59.635 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:59.635 Compiler for C supports arguments -Wformat-security: YES 00:01:59.635 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.635 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.635 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.635 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.635 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.635 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.635 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.635 Compiler for C supports arguments -Wundef: YES 00:01:59.635 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.635 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.635 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.635 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.635 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.635 Compiler for C supports arguments -mavx512f: YES 00:01:59.635 Checking if "AVX512 checking" compiles: YES 00:01:59.635 Fetching value of define "__SSE4_2__" : 1 00:01:59.635 Fetching value of define "__AES__" : 1 00:01:59.635 Fetching value of define "__AVX__" : 1 00:01:59.635 Fetching value of define "__AVX2__" : 1 00:01:59.635 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.635 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.635 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.635 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.635 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.635 Fetching value of define "__PCLMUL__" : 1 00:01:59.635 Fetching value of define "__RDRND__" : 1 00:01:59.635 Fetching value of define "__RDSEED__" : 1 00:01:59.635 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.635 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.635 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.635 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.635 Checking for function "getentropy" : YES 00:01:59.635 Message: lib/eal: Defining dependency "eal" 00:01:59.635 Message: lib/ring: Defining dependency "ring" 00:01:59.635 Message: lib/rcu: Defining dependency "rcu" 00:01:59.635 Message: lib/mempool: Defining dependency "mempool" 00:01:59.635 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.635 Fetching value of define "__PCLMUL__" : 
1 (cached) 00:01:59.635 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.635 Compiler for C supports arguments -mpclmul: YES 00:01:59.635 Compiler for C supports arguments -maes: YES 00:01:59.635 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.635 Compiler for C supports arguments -mavx512bw: YES 00:01:59.635 Compiler for C supports arguments -mavx512dq: YES 00:01:59.635 Compiler for C supports arguments -mavx512vl: YES 00:01:59.635 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.635 Compiler for C supports arguments -mavx2: YES 00:01:59.635 Compiler for C supports arguments -mavx: YES 00:01:59.635 Message: lib/net: Defining dependency "net" 00:01:59.635 Message: lib/meter: Defining dependency "meter" 00:01:59.635 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.635 Message: lib/pci: Defining dependency "pci" 00:01:59.635 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.635 Message: lib/metrics: Defining dependency "metrics" 00:01:59.635 Message: lib/hash: Defining dependency "hash" 00:01:59.635 Message: lib/timer: Defining dependency "timer" 00:01:59.635 Fetching value of define "__AVX2__" : 1 (cached) 00:01:59.635 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.635 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:59.636 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:59.636 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:59.636 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:59.636 Message: lib/acl: Defining dependency "acl" 00:01:59.636 Message: lib/bbdev: Defining dependency "bbdev" 00:01:59.636 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:59.636 Run-time dependency libelf found: YES 0.186 00:01:59.636 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:01:59.636 Message: lib/bpf: Defining dependency "bpf" 00:01:59.636 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:59.636 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.636 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.636 Message: lib/distributor: Defining dependency "distributor" 00:01:59.636 Message: lib/efd: Defining dependency "efd" 00:01:59.636 Message: lib/eventdev: Defining dependency "eventdev" 00:01:59.636 Message: lib/gpudev: Defining dependency "gpudev" 00:01:59.636 Message: lib/gro: Defining dependency "gro" 00:01:59.636 Message: lib/gso: Defining dependency "gso" 00:01:59.636 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:59.636 Message: lib/jobstats: Defining dependency "jobstats" 00:01:59.636 Message: lib/latencystats: Defining dependency "latencystats" 00:01:59.636 Message: lib/lpm: Defining dependency "lpm" 00:01:59.636 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.636 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.636 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:59.636 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:59.636 Message: lib/member: Defining dependency "member" 00:01:59.636 Message: lib/pcapng: Defining dependency "pcapng" 00:01:59.636 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.636 Message: lib/power: Defining dependency "power" 00:01:59.636 Message: lib/rawdev: Defining dependency "rawdev" 00:01:59.636 Message: lib/regexdev: Defining dependency "regexdev" 00:01:59.636 
Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.636 Message: lib/rib: Defining dependency "rib" 00:01:59.636 Message: lib/reorder: Defining dependency "reorder" 00:01:59.636 Message: lib/sched: Defining dependency "sched" 00:01:59.636 Message: lib/security: Defining dependency "security" 00:01:59.636 Message: lib/stack: Defining dependency "stack" 00:01:59.636 Has header "linux/userfaultfd.h" : YES 00:01:59.636 Message: lib/vhost: Defining dependency "vhost" 00:01:59.636 Message: lib/ipsec: Defining dependency "ipsec" 00:01:59.636 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.636 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:59.636 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:59.636 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:59.636 Message: lib/fib: Defining dependency "fib" 00:01:59.636 Message: lib/port: Defining dependency "port" 00:01:59.636 Message: lib/pdump: Defining dependency "pdump" 00:01:59.636 Message: lib/table: Defining dependency "table" 00:01:59.636 Message: lib/pipeline: Defining dependency "pipeline" 00:01:59.636 Message: lib/graph: Defining dependency "graph" 00:01:59.636 Message: lib/node: Defining dependency "node" 00:01:59.636 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.636 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.636 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.636 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.636 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:59.636 Compiler for C supports arguments -Wno-unused-value: YES 00:01:59.636 Compiler for C supports arguments -Wno-format: YES 00:01:59.636 Compiler for C supports arguments -Wno-format-security: YES 00:02:01.011 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:01.011 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:01.011 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:01.011 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:01.011 Fetching value of define "__AVX2__" : 1 (cached) 00:02:01.011 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:01.011 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.011 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:01.011 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:01.011 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:01.011 Program doxygen found: YES (/usr/bin/doxygen) 00:02:01.011 Configuring doxy-api.conf using configuration 00:02:01.011 Program sphinx-build found: NO 00:02:01.011 Configuring rte_build_config.h using configuration 00:02:01.011 Message: 00:02:01.011 ================= 00:02:01.011 Applications Enabled 00:02:01.011 ================= 00:02:01.011 00:02:01.011 apps: 00:02:01.011 pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 00:02:01.011 test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 00:02:01.011 00:02:01.011 00:02:01.011 Message: 00:02:01.011 ================= 00:02:01.011 Libraries Enabled 00:02:01.011 ================= 00:02:01.011 00:02:01.011 libs: 00:02:01.011 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:01.011 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:01.011 bbdev, bitratestats, bpf, cfgfile, compressdev, 
cryptodev, distributor, efd, 00:02:01.011 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:01.011 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:01.011 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:01.011 table, pipeline, graph, node, 00:02:01.011 00:02:01.011 Message: 00:02:01.011 =============== 00:02:01.011 Drivers Enabled 00:02:01.011 =============== 00:02:01.011 00:02:01.011 common: 00:02:01.011 00:02:01.011 bus: 00:02:01.011 pci, vdev, 00:02:01.011 mempool: 00:02:01.011 ring, 00:02:01.011 dma: 00:02:01.011 00:02:01.011 net: 00:02:01.011 i40e, 00:02:01.011 raw: 00:02:01.011 00:02:01.011 crypto: 00:02:01.011 00:02:01.011 compress: 00:02:01.011 00:02:01.011 regex: 00:02:01.011 00:02:01.011 vdpa: 00:02:01.011 00:02:01.011 event: 00:02:01.011 00:02:01.011 baseband: 00:02:01.011 00:02:01.011 gpu: 00:02:01.011 00:02:01.011 00:02:01.011 Message: 00:02:01.011 ================= 00:02:01.012 Content Skipped 00:02:01.012 ================= 00:02:01.012 00:02:01.012 apps: 00:02:01.012 dumpcap: missing dependency, "libpcap" 00:02:01.012 00:02:01.012 libs: 00:02:01.012 kni: explicitly disabled via build config (deprecated lib) 00:02:01.012 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:01.012 00:02:01.012 drivers: 00:02:01.012 common/cpt: not in enabled drivers build config 00:02:01.012 common/dpaax: not in enabled drivers build config 00:02:01.012 common/iavf: not in enabled drivers build config 00:02:01.012 common/idpf: not in enabled drivers build config 00:02:01.012 common/mvep: not in enabled drivers build config 00:02:01.012 common/octeontx: not in enabled drivers build config 00:02:01.012 bus/auxiliary: not in enabled drivers build config 00:02:01.012 bus/dpaa: not in enabled drivers build config 00:02:01.012 bus/fslmc: not in enabled drivers build config 00:02:01.012 bus/ifpga: not in enabled drivers build config 00:02:01.012 bus/vmbus: not in enabled drivers build config 00:02:01.012 common/cnxk: not in enabled drivers build config 00:02:01.012 common/mlx5: not in enabled drivers build config 00:02:01.012 common/qat: not in enabled drivers build config 00:02:01.012 common/sfc_efx: not in enabled drivers build config 00:02:01.012 mempool/bucket: not in enabled drivers build config 00:02:01.012 mempool/cnxk: not in enabled drivers build config 00:02:01.012 mempool/dpaa: not in enabled drivers build config 00:02:01.012 mempool/dpaa2: not in enabled drivers build config 00:02:01.012 mempool/octeontx: not in enabled drivers build config 00:02:01.012 mempool/stack: not in enabled drivers build config 00:02:01.012 dma/cnxk: not in enabled drivers build config 00:02:01.012 dma/dpaa: not in enabled drivers build config 00:02:01.012 dma/dpaa2: not in enabled drivers build config 00:02:01.012 dma/hisilicon: not in enabled drivers build config 00:02:01.012 dma/idxd: not in enabled drivers build config 00:02:01.012 dma/ioat: not in enabled drivers build config 00:02:01.012 dma/skeleton: not in enabled drivers build config 00:02:01.012 net/af_packet: not in enabled drivers build config 00:02:01.012 net/af_xdp: not in enabled drivers build config 00:02:01.012 net/ark: not in enabled drivers build config 00:02:01.012 net/atlantic: not in enabled drivers build config 00:02:01.012 net/avp: not in enabled drivers build config 00:02:01.012 net/axgbe: not in enabled drivers build config 00:02:01.012 net/bnx2x: not in enabled drivers build config 00:02:01.012 net/bnxt: not in enabled drivers build config 00:02:01.012 
net/bonding: not in enabled drivers build config 00:02:01.012 net/cnxk: not in enabled drivers build config 00:02:01.012 net/cxgbe: not in enabled drivers build config 00:02:01.012 net/dpaa: not in enabled drivers build config 00:02:01.012 net/dpaa2: not in enabled drivers build config 00:02:01.012 net/e1000: not in enabled drivers build config 00:02:01.012 net/ena: not in enabled drivers build config 00:02:01.012 net/enetc: not in enabled drivers build config 00:02:01.012 net/enetfec: not in enabled drivers build config 00:02:01.012 net/enic: not in enabled drivers build config 00:02:01.012 net/failsafe: not in enabled drivers build config 00:02:01.012 net/fm10k: not in enabled drivers build config 00:02:01.012 net/gve: not in enabled drivers build config 00:02:01.012 net/hinic: not in enabled drivers build config 00:02:01.012 net/hns3: not in enabled drivers build config 00:02:01.012 net/iavf: not in enabled drivers build config 00:02:01.012 net/ice: not in enabled drivers build config 00:02:01.012 net/idpf: not in enabled drivers build config 00:02:01.012 net/igc: not in enabled drivers build config 00:02:01.012 net/ionic: not in enabled drivers build config 00:02:01.012 net/ipn3ke: not in enabled drivers build config 00:02:01.012 net/ixgbe: not in enabled drivers build config 00:02:01.012 net/kni: not in enabled drivers build config 00:02:01.012 net/liquidio: not in enabled drivers build config 00:02:01.012 net/mana: not in enabled drivers build config 00:02:01.012 net/memif: not in enabled drivers build config 00:02:01.012 net/mlx4: not in enabled drivers build config 00:02:01.012 net/mlx5: not in enabled drivers build config 00:02:01.012 net/mvneta: not in enabled drivers build config 00:02:01.012 net/mvpp2: not in enabled drivers build config 00:02:01.012 net/netvsc: not in enabled drivers build config 00:02:01.012 net/nfb: not in enabled drivers build config 00:02:01.012 net/nfp: not in enabled drivers build config 00:02:01.012 net/ngbe: not in enabled drivers build config 00:02:01.012 net/null: not in enabled drivers build config 00:02:01.012 net/octeontx: not in enabled drivers build config 00:02:01.012 net/octeon_ep: not in enabled drivers build config 00:02:01.012 net/pcap: not in enabled drivers build config 00:02:01.012 net/pfe: not in enabled drivers build config 00:02:01.012 net/qede: not in enabled drivers build config 00:02:01.012 net/ring: not in enabled drivers build config 00:02:01.012 net/sfc: not in enabled drivers build config 00:02:01.012 net/softnic: not in enabled drivers build config 00:02:01.012 net/tap: not in enabled drivers build config 00:02:01.012 net/thunderx: not in enabled drivers build config 00:02:01.012 net/txgbe: not in enabled drivers build config 00:02:01.012 net/vdev_netvsc: not in enabled drivers build config 00:02:01.012 net/vhost: not in enabled drivers build config 00:02:01.012 net/virtio: not in enabled drivers build config 00:02:01.012 net/vmxnet3: not in enabled drivers build config 00:02:01.012 raw/cnxk_bphy: not in enabled drivers build config 00:02:01.012 raw/cnxk_gpio: not in enabled drivers build config 00:02:01.012 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:01.012 raw/ifpga: not in enabled drivers build config 00:02:01.012 raw/ntb: not in enabled drivers build config 00:02:01.012 raw/skeleton: not in enabled drivers build config 00:02:01.012 crypto/armv8: not in enabled drivers build config 00:02:01.012 crypto/bcmfs: not in enabled drivers build config 00:02:01.012 crypto/caam_jr: not in enabled drivers build config 
00:02:01.012 crypto/ccp: not in enabled drivers build config 00:02:01.012 crypto/cnxk: not in enabled drivers build config 00:02:01.012 crypto/dpaa_sec: not in enabled drivers build config 00:02:01.012 crypto/dpaa2_sec: not in enabled drivers build config 00:02:01.012 crypto/ipsec_mb: not in enabled drivers build config 00:02:01.012 crypto/mlx5: not in enabled drivers build config 00:02:01.012 crypto/mvsam: not in enabled drivers build config 00:02:01.012 crypto/nitrox: not in enabled drivers build config 00:02:01.012 crypto/null: not in enabled drivers build config 00:02:01.012 crypto/octeontx: not in enabled drivers build config 00:02:01.012 crypto/openssl: not in enabled drivers build config 00:02:01.012 crypto/scheduler: not in enabled drivers build config 00:02:01.012 crypto/uadk: not in enabled drivers build config 00:02:01.012 crypto/virtio: not in enabled drivers build config 00:02:01.012 compress/isal: not in enabled drivers build config 00:02:01.012 compress/mlx5: not in enabled drivers build config 00:02:01.012 compress/octeontx: not in enabled drivers build config 00:02:01.012 compress/zlib: not in enabled drivers build config 00:02:01.012 regex/mlx5: not in enabled drivers build config 00:02:01.012 regex/cn9k: not in enabled drivers build config 00:02:01.012 vdpa/ifc: not in enabled drivers build config 00:02:01.012 vdpa/mlx5: not in enabled drivers build config 00:02:01.012 vdpa/sfc: not in enabled drivers build config 00:02:01.012 event/cnxk: not in enabled drivers build config 00:02:01.012 event/dlb2: not in enabled drivers build config 00:02:01.012 event/dpaa: not in enabled drivers build config 00:02:01.012 event/dpaa2: not in enabled drivers build config 00:02:01.012 event/dsw: not in enabled drivers build config 00:02:01.012 event/opdl: not in enabled drivers build config 00:02:01.012 event/skeleton: not in enabled drivers build config 00:02:01.012 event/sw: not in enabled drivers build config 00:02:01.012 event/octeontx: not in enabled drivers build config 00:02:01.012 baseband/acc: not in enabled drivers build config 00:02:01.012 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:01.012 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:01.012 baseband/la12xx: not in enabled drivers build config 00:02:01.012 baseband/null: not in enabled drivers build config 00:02:01.012 baseband/turbo_sw: not in enabled drivers build config 00:02:01.012 gpu/cuda: not in enabled drivers build config 00:02:01.012 00:02:01.012 00:02:01.012 Build targets in project: 313 00:02:01.012 00:02:01.012 DPDK 22.11.4 00:02:01.012 00:02:01.012 User defined options 00:02:01.012 libdir : lib 00:02:01.012 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:01.012 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:01.012 c_link_args : 00:02:01.012 enable_docs : false 00:02:01.012 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:01.012 enable_kmods : false 00:02:01.012 machine : native 00:02:01.012 tests : false 00:02:01.012 00:02:01.012 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.012 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
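(Reconstruction note, not part of the captured log.) The "User defined options" summary above can be read back as a meson configure step. A minimal sketch follows, assuming the listed values are passed straight through to meson setup; the actual command is assembled inside common/autobuild_common.sh and is not reproduced in this log, and the deprecation warning above suggests it was issued as plain `meson [options]` rather than the explicit `meson setup [options]` form.

# Sketch only: option values copied from the "User defined options" summary above.
# The real invocation is generated by common/autobuild_common.sh and may differ.
cd /home/vagrant/spdk_repo/dpdk
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false
# Build and install steps as they appear later in this log:
ninja -C build-tmp -j10
ninja -C build-tmp -j10 install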
00:02:01.012 10:27:27 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:01.012 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:01.012 [1/740] Generating lib/rte_telemetry_def with a custom command 00:02:01.012 [2/740] Generating lib/rte_kvargs_def with a custom command 00:02:01.012 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:01.012 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:01.012 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.012 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.012 [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.012 [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.012 [9/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.012 [10/740] Linking static target lib/librte_kvargs.a 00:02:01.012 [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.012 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.012 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.013 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.271 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.271 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.271 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.271 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.271 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.271 [20/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:01.271 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.271 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.530 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.530 [24/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.530 [25/740] Linking target lib/librte_kvargs.so.23.0 00:02:01.530 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.530 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.530 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:01.530 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.530 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.530 [31/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.530 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.530 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.530 [34/740] Linking static target lib/librte_telemetry.a 00:02:01.788 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:01.788 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.788 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.788 [38/740] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:01.788 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.788 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.788 [41/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.788 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:01.788 [43/740] Linking target lib/librte_telemetry.so.23.0 00:02:02.047 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.047 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.047 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.047 [47/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:02.047 [48/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:02.047 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.047 [50/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.047 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.047 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.047 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.306 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.306 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.306 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.306 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.306 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.306 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.306 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:02.306 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.306 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.306 [63/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.306 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.306 [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:02.306 [66/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.306 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:02.306 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.306 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:02.306 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.306 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.565 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:02.565 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:02.565 [74/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:02.565 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:02.565 [76/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.565 [77/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.565 [78/740] Generating 
lib/rte_eal_def with a custom command 00:02:02.565 [79/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.565 [80/740] Generating lib/rte_eal_mingw with a custom command 00:02:02.565 [81/740] Generating lib/rte_ring_def with a custom command 00:02:02.565 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:02.565 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:02.565 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:02:02.565 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.565 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.565 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.825 [88/740] Linking static target lib/librte_ring.a 00:02:02.825 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.825 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:02.825 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:02.825 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.825 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.083 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.083 [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.083 [96/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.083 [97/740] Generating lib/rte_mbuf_def with a custom command 00:02:03.083 [98/740] Linking static target lib/librte_eal.a 00:02:03.083 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.083 [100/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:03.083 [101/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.342 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.342 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.342 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.342 [105/740] Linking static target lib/librte_rcu.a 00:02:03.600 [106/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.600 [107/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.600 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.600 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.600 [110/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.600 [111/740] Linking static target lib/librte_mempool.a 00:02:03.600 [112/740] Generating lib/rte_net_def with a custom command 00:02:03.600 [113/740] Generating lib/rte_net_mingw with a custom command 00:02:03.600 [114/740] Generating lib/rte_meter_def with a custom command 00:02:03.600 [115/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.600 [116/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.600 [117/740] Generating lib/rte_meter_mingw with a custom command 00:02:03.600 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.857 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.857 [120/740] Linking static target lib/librte_meter.a 00:02:03.857 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:04.115 [122/740] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:04.115 [123/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.115 [124/740] Linking static target lib/librte_net.a 00:02:04.115 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.115 [126/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.115 [127/740] Linking static target lib/librte_mbuf.a 00:02:04.115 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.115 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.373 [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:04.373 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.373 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:04.630 [133/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.895 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:04.895 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:04.895 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:04.895 [137/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.895 [138/740] Generating lib/rte_ethdev_def with a custom command 00:02:04.895 [139/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:04.895 [140/740] Generating lib/rte_pci_def with a custom command 00:02:04.895 [141/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:04.895 [142/740] Generating lib/rte_pci_mingw with a custom command 00:02:04.895 [143/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:04.895 [144/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:05.153 [145/740] Linking static target lib/librte_pci.a 00:02:05.153 [146/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:05.153 [147/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:05.153 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:05.153 [149/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:05.153 [150/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.412 [151/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:05.412 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:05.412 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:05.412 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:05.412 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:05.412 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:05.412 [157/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:05.412 [158/740] Generating lib/rte_cmdline_def with a custom command 00:02:05.412 [159/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.412 [160/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:05.412 [161/740] Generating lib/rte_metrics_def with a custom command 00:02:05.412 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:02:05.670 [163/740] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:05.670 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:05.670 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:05.670 [166/740] Linking static target lib/librte_cmdline.a 00:02:05.670 [167/740] Generating lib/rte_hash_def with a custom command 00:02:05.670 [168/740] Generating lib/rte_hash_mingw with a custom command 00:02:05.671 [169/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.671 [170/740] Generating lib/rte_timer_def with a custom command 00:02:05.671 [171/740] Generating lib/rte_timer_mingw with a custom command 00:02:05.671 [172/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:05.928 [173/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:05.928 [174/740] Linking static target lib/librte_metrics.a 00:02:05.928 [175/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.928 [176/740] Linking static target lib/librte_timer.a 00:02:06.190 [177/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.487 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.487 [179/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.487 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:06.487 [181/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.746 [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:06.746 [183/740] Linking static target lib/librte_ethdev.a 00:02:06.746 [184/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:06.746 [185/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:07.004 [186/740] Generating lib/rte_acl_def with a custom command 00:02:07.004 [187/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:07.005 [188/740] Generating lib/rte_acl_mingw with a custom command 00:02:07.005 [189/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:07.005 [190/740] Generating lib/rte_bbdev_def with a custom command 00:02:07.005 [191/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:07.005 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:07.005 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:07.005 [194/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:07.571 [195/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:07.571 [196/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:07.571 [197/740] Linking static target lib/librte_bitratestats.a 00:02:07.571 [198/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:07.571 [199/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:07.571 [200/740] Linking static target lib/librte_bbdev.a 00:02:07.830 [201/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.088 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:08.088 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:08.088 [204/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:08.088 [205/740] Linking static target lib/librte_hash.a 00:02:08.346 [206/740] Generating lib/bbdev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:08.346 [207/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:08.346 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:08.346 [209/740] Generating lib/rte_bpf_def with a custom command 00:02:08.604 [210/740] Generating lib/rte_bpf_mingw with a custom command 00:02:08.604 [211/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:08.604 [212/740] Generating lib/rte_cfgfile_def with a custom command 00:02:08.604 [213/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:08.604 [214/740] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:08.604 [215/740] Linking static target lib/acl/libavx512_tmp.a 00:02:08.862 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:08.862 [217/740] Linking static target lib/librte_cfgfile.a 00:02:08.862 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:08.862 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.862 [220/740] Generating lib/rte_compressdev_def with a custom command 00:02:09.120 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:09.120 [222/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.120 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:09.120 [224/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.120 [225/740] Generating lib/rte_cryptodev_def with a custom command 00:02:09.120 [226/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:09.378 [227/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.378 [228/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:09.378 [229/740] Linking static target lib/librte_bpf.a 00:02:09.636 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:09.636 [231/740] Linking static target lib/librte_compressdev.a 00:02:09.636 [232/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:09.636 [233/740] Linking static target lib/librte_acl.a 00:02:09.636 [234/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:09.636 [235/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:09.636 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:09.636 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:09.894 [238/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.894 [239/740] Generating lib/rte_efd_def with a custom command 00:02:09.894 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:09.894 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.152 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:10.152 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:10.152 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:10.152 [245/740] Linking static target lib/librte_distributor.a 00:02:10.410 [246/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:10.410 [247/740] Generating lib/distributor.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:10.410 [248/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.410 [249/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:10.978 [250/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:10.978 [251/740] Generating lib/rte_eventdev_def with a custom command 00:02:10.978 [252/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:11.236 [253/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:11.236 [254/740] Linking static target lib/librte_efd.a 00:02:11.236 [255/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.236 [256/740] Linking static target lib/librte_cryptodev.a 00:02:11.236 [257/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:11.236 [258/740] Generating lib/rte_gpudev_def with a custom command 00:02:11.236 [259/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:11.493 [260/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.493 [261/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:11.751 [262/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:11.751 [263/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:11.751 [264/740] Linking static target lib/librte_gpudev.a 00:02:11.751 [265/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:11.751 [266/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.751 [267/740] Linking target lib/librte_eal.so.23.0 00:02:12.008 [268/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:12.008 [269/740] Generating lib/rte_gro_def with a custom command 00:02:12.008 [270/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:12.008 [271/740] Generating lib/rte_gro_mingw with a custom command 00:02:12.008 [272/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:12.008 [273/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.008 [274/740] Linking target lib/librte_ring.so.23.0 00:02:12.008 [275/740] Linking target lib/librte_meter.so.23.0 00:02:12.266 [276/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:12.266 [277/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:12.266 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:12.266 [279/740] Linking target lib/librte_rcu.so.23.0 00:02:12.266 [280/740] Linking target lib/librte_mempool.so.23.0 00:02:12.266 [281/740] Linking target lib/librte_pci.so.23.0 00:02:12.266 [282/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:12.266 [283/740] Linking target lib/librte_timer.so.23.0 00:02:12.266 [284/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:12.525 [285/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:12.525 [286/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:12.525 [287/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:12.525 [288/740] Linking static target lib/librte_gro.a 00:02:12.525 [289/740] Linking target lib/librte_acl.so.23.0 
00:02:12.525 [290/740] Linking target lib/librte_cfgfile.so.23.0 00:02:12.525 [291/740] Linking target lib/librte_mbuf.so.23.0 00:02:12.525 [292/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:12.525 [293/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:12.525 [294/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:12.525 [295/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:12.525 [296/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:12.525 [297/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.525 [298/740] Linking target lib/librte_net.so.23.0 00:02:12.525 [299/740] Linking target lib/librte_bbdev.so.23.0 00:02:12.525 [300/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.525 [301/740] Linking target lib/librte_compressdev.so.23.0 00:02:12.783 [302/740] Linking static target lib/librte_eventdev.a 00:02:12.783 [303/740] Linking target lib/librte_distributor.so.23.0 00:02:12.783 [304/740] Generating lib/rte_gso_def with a custom command 00:02:12.783 [305/740] Linking target lib/librte_gpudev.so.23.0 00:02:12.783 [306/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:12.783 [307/740] Generating lib/rte_gso_mingw with a custom command 00:02:12.783 [308/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:12.783 [309/740] Linking target lib/librte_ethdev.so.23.0 00:02:12.783 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:12.783 [311/740] Linking target lib/librte_cmdline.so.23.0 00:02:13.041 [312/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:13.041 [313/740] Linking target lib/librte_hash.so.23.0 00:02:13.041 [314/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:13.041 [315/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:13.041 [316/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:13.041 [317/740] Linking target lib/librte_metrics.so.23.0 00:02:13.041 [318/740] Linking target lib/librte_bpf.so.23.0 00:02:13.041 [319/740] Linking target lib/librte_gro.so.23.0 00:02:13.041 [320/740] Linking static target lib/librte_gso.a 00:02:13.041 [321/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:13.041 [322/740] Linking target lib/librte_efd.so.23.0 00:02:13.041 [323/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:13.299 [324/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:13.299 [325/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:13.299 [326/740] Linking target lib/librte_bitratestats.so.23.0 00:02:13.299 [327/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.299 [328/740] Generating lib/rte_ip_frag_def with a custom command 00:02:13.299 [329/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:13.299 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:13.299 [331/740] Linking target lib/librte_gso.so.23.0 00:02:13.299 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:13.299 [333/740] Generating lib/rte_latencystats_def with a custom command 00:02:13.299 [334/740] Generating 
lib/rte_latencystats_mingw with a custom command 00:02:13.299 [335/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:13.299 [336/740] Linking static target lib/librte_jobstats.a 00:02:13.299 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:13.557 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:13.557 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:13.557 [340/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:13.557 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:13.557 [342/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.815 [343/740] Linking target lib/librte_jobstats.so.23.0 00:02:13.815 [344/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:13.815 [345/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:13.815 [346/740] Linking static target lib/librte_latencystats.a 00:02:13.815 [347/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:13.815 [348/740] Linking static target lib/librte_ip_frag.a 00:02:14.073 [349/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.073 [350/740] Linking target lib/librte_cryptodev.so.23.0 00:02:14.073 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:14.073 [352/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:14.073 [353/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:14.073 [354/740] Generating lib/rte_member_def with a custom command 00:02:14.073 [355/740] Generating lib/rte_member_mingw with a custom command 00:02:14.073 [356/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.073 [357/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:14.073 [358/740] Linking target lib/librte_latencystats.so.23.0 00:02:14.073 [359/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:14.073 [360/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.073 [361/740] Generating lib/rte_pcapng_def with a custom command 00:02:14.073 [362/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:14.073 [363/740] Linking target lib/librte_ip_frag.so.23.0 00:02:14.331 [364/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.331 [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.331 [366/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:14.331 [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.331 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:14.331 [369/740] Linking static target lib/librte_lpm.a 00:02:14.589 [370/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:14.589 [371/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.589 [372/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:14.589 [373/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:14.847 [374/740] Generating lib/rte_power_def with a custom command 00:02:14.847 [375/740] Generating 
lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.847 [376/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:14.847 [377/740] Generating lib/rte_power_mingw with a custom command 00:02:14.847 [378/740] Linking target lib/librte_lpm.so.23.0 00:02:14.847 [379/740] Generating lib/rte_rawdev_def with a custom command 00:02:14.847 [380/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:14.847 [381/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:14.847 [382/740] Linking static target lib/librte_pcapng.a 00:02:14.847 [383/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:14.847 [384/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:14.847 [385/740] Generating lib/rte_regexdev_def with a custom command 00:02:14.847 [386/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:14.847 [387/740] Generating lib/rte_dmadev_def with a custom command 00:02:15.104 [388/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:15.104 [389/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:15.104 [390/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.104 [391/740] Generating lib/rte_rib_def with a custom command 00:02:15.104 [392/740] Generating lib/rte_rib_mingw with a custom command 00:02:15.104 [393/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:15.104 [394/740] Linking static target lib/librte_rawdev.a 00:02:15.104 [395/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.104 [396/740] Linking target lib/librte_pcapng.so.23.0 00:02:15.362 [397/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:15.362 [398/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.362 [399/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.362 [400/740] Generating lib/rte_reorder_def with a custom command 00:02:15.362 [401/740] Linking static target lib/librte_dmadev.a 00:02:15.362 [402/740] Linking static target lib/librte_power.a 00:02:15.362 [403/740] Generating lib/rte_reorder_mingw with a custom command 00:02:15.362 [404/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:15.362 [405/740] Linking static target lib/librte_regexdev.a 00:02:15.619 [406/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.619 [407/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:15.619 [408/740] Linking target lib/librte_eventdev.so.23.0 00:02:15.619 [409/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.619 [410/740] Linking target lib/librte_rawdev.so.23.0 00:02:15.619 [411/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:15.619 [412/740] Linking static target lib/librte_member.a 00:02:15.619 [413/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:15.876 [414/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:15.876 [415/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.876 [416/740] Linking static target lib/librte_reorder.a 00:02:15.876 [417/740] Generating lib/rte_sched_def with a custom command 00:02:15.876 [418/740] Compiling C object 
lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:15.876 [419/740] Generating lib/rte_sched_mingw with a custom command 00:02:15.876 [420/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:15.877 [421/740] Generating lib/rte_security_def with a custom command 00:02:15.877 [422/740] Generating lib/rte_security_mingw with a custom command 00:02:15.877 [423/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.877 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:15.877 [425/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:16.133 [426/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.133 [427/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:16.133 [428/740] Linking static target lib/librte_rib.a 00:02:16.133 [429/740] Generating lib/rte_stack_def with a custom command 00:02:16.133 [430/740] Linking target lib/librte_dmadev.so.23.0 00:02:16.133 [431/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.133 [432/740] Linking target lib/librte_reorder.so.23.0 00:02:16.133 [433/740] Generating lib/rte_stack_mingw with a custom command 00:02:16.133 [434/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:16.133 [435/740] Linking static target lib/librte_stack.a 00:02:16.133 [436/740] Linking target lib/librte_member.so.23.0 00:02:16.133 [437/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:16.133 [438/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.133 [439/740] Linking target lib/librte_regexdev.so.23.0 00:02:16.133 [440/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.391 [441/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.391 [442/740] Linking target lib/librte_stack.so.23.0 00:02:16.391 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.391 [444/740] Linking target lib/librte_power.so.23.0 00:02:16.391 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.391 [446/740] Linking static target lib/librte_security.a 00:02:16.391 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.649 [448/740] Linking target lib/librte_rib.so.23.0 00:02:16.649 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:16.649 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.649 [451/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.649 [452/740] Generating lib/rte_vhost_def with a custom command 00:02:16.649 [453/740] Generating lib/rte_vhost_mingw with a custom command 00:02:16.905 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.905 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.905 [456/740] Linking target lib/librte_security.so.23.0 00:02:16.906 [457/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:17.163 [458/740] Linking static target lib/librte_sched.a 00:02:17.163 [459/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:17.420 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:17.420 
[461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:17.420 [462/740] Generating lib/rte_ipsec_def with a custom command 00:02:17.420 [463/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:17.420 [464/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.678 [465/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.678 [466/740] Linking target lib/librte_sched.so.23.0 00:02:17.678 [467/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.678 [468/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:17.941 [469/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:17.941 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:17.941 [471/740] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:17.941 [472/740] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:17.941 [473/740] Generating lib/rte_fib_def with a custom command 00:02:17.941 [474/740] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:17.941 [475/740] Generating lib/rte_fib_mingw with a custom command 00:02:17.941 [476/740] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:17.941 [477/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:18.213 [478/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:18.471 [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:18.471 [480/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:18.471 [481/740] Linking static target lib/librte_ipsec.a 00:02:18.728 [482/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:18.728 [483/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:18.728 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:18.728 [485/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:18.728 [486/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:18.728 [487/740] Linking static target lib/librte_fib.a 00:02:18.986 [488/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.986 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:18.986 [490/740] Linking target lib/librte_ipsec.so.23.0 00:02:19.244 [491/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.244 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:19.244 [493/740] Linking target lib/librte_fib.so.23.0 00:02:19.500 [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:19.500 [495/740] Generating lib/rte_port_def with a custom command 00:02:19.500 [496/740] Generating lib/rte_port_mingw with a custom command 00:02:19.501 [497/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:19.501 [498/740] Generating lib/rte_pdump_def with a custom command 00:02:19.757 [499/740] Generating lib/rte_pdump_mingw with a custom command 00:02:19.757 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:19.757 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:19.757 [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:19.757 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:19.757 [504/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:20.014 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:20.014 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:20.014 [507/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:20.271 [508/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:20.271 [509/740] Linking static target lib/librte_port.a 00:02:20.271 [510/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:20.271 [511/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:20.532 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:20.532 [513/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:20.532 [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:20.799 [515/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:20.799 [516/740] Linking static target lib/librte_pdump.a 00:02:21.057 [517/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.057 [518/740] Linking target lib/librte_port.so.23.0 00:02:21.057 [519/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:21.057 [520/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.057 [521/740] Linking target lib/librte_pdump.so.23.0 00:02:21.057 [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:21.057 [523/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:21.057 [524/740] Generating lib/rte_table_def with a custom command 00:02:21.314 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:21.314 [526/740] Generating lib/rte_table_mingw with a custom command 00:02:21.314 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:21.572 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:21.572 [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:21.572 [530/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:21.572 [531/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:21.572 [532/740] Linking static target lib/librte_table.a 00:02:21.572 [533/740] Generating lib/rte_pipeline_def with a custom command 00:02:21.572 [534/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:21.829 [535/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:22.087 [536/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:22.087 [537/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:22.345 [538/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.345 [539/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:22.345 [540/740] Linking target lib/librte_table.so.23.0 00:02:22.345 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:22.345 [542/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:22.602 [543/740] Generating lib/rte_graph_def with a custom command 00:02:22.602 [544/740] Generating lib/rte_graph_mingw with a custom command 00:02:22.602 [545/740] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:22.602 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:22.859 [547/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:22.859 [548/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:22.860 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:22.860 [550/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:23.117 [551/740] Linking static target lib/librte_graph.a 00:02:23.117 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:23.117 [553/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:23.117 [554/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:23.375 [555/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:23.375 [556/740] Generating lib/rte_node_def with a custom command 00:02:23.375 [557/740] Generating lib/rte_node_mingw with a custom command 00:02:23.632 [558/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:23.918 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.919 [560/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.919 [561/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:23.919 [562/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:23.919 [563/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:23.919 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:23.919 [565/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.919 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:24.176 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.176 [568/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.176 [569/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.176 [570/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:24.176 [571/740] Linking target lib/librte_graph.so.23.0 00:02:24.176 [572/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:24.176 [573/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:24.176 [574/740] Linking static target lib/librte_node.a 00:02:24.176 [575/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:24.176 [576/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:24.176 [577/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.176 [578/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.176 [579/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.176 [580/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:24.434 [581/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.434 [582/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.434 [583/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.434 [584/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.434 [585/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.692 [586/740] Linking static target 
drivers/librte_bus_vdev.a 00:02:24.692 [587/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.692 [588/740] Linking target lib/librte_node.so.23.0 00:02:24.692 [589/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.692 [590/740] Linking static target drivers/librte_bus_pci.a 00:02:24.692 [591/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.692 [592/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.949 [593/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.949 [594/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:24.949 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:24.949 [596/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:25.207 [597/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:25.207 [598/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.207 [599/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:25.207 [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:25.207 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:25.464 [602/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:25.464 [603/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.464 [604/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:25.464 [605/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.464 [606/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.464 [607/740] Linking static target drivers/librte_mempool_ring.a 00:02:25.464 [608/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.721 [609/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:25.721 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:25.978 [611/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:26.236 [612/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:26.236 [613/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:26.801 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:27.058 [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:27.058 [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:27.058 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:27.316 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:27.316 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:27.574 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:27.831 [621/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:27.831 [622/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:27.831 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 
00:02:27.831 [624/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:28.762 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:28.762 [626/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:28.762 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:28.762 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:28.762 [629/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:28.762 [630/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:29.020 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:29.020 [632/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:29.585 [633/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:29.585 [634/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:29.842 [635/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:29.842 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:29.842 [637/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:30.100 [638/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:30.100 [639/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:30.358 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:30.358 [641/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:30.358 [642/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:30.358 [643/740] Linking static target drivers/librte_net_i40e.a 00:02:30.358 [644/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:30.358 [645/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:30.616 [646/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:30.616 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:30.874 [648/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:30.874 [649/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:30.874 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:31.131 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:31.389 [652/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.389 [653/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:31.389 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:31.699 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:31.699 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:31.699 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:31.699 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:31.958 [659/740] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:31.958 [660/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:31.958 [661/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:31.958 [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:32.217 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:32.217 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:32.475 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:32.475 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:32.733 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:32.991 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:32.991 [669/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.991 [670/740] Linking static target lib/librte_vhost.a 00:02:32.991 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:33.249 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:33.507 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:33.507 [674/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:33.765 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:33.765 [676/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:33.765 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:34.023 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:34.023 [679/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:34.023 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:34.023 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:34.281 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:34.281 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:34.539 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:34.539 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:34.539 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:34.539 [687/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.798 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:34.798 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:34.798 [690/740] Linking target lib/librte_vhost.so.23.0 00:02:34.798 [691/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:35.056 [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:35.056 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:35.314 [694/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:35.314 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:35.572 [696/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:35.572 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:35.830 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:35.830 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:35.830 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:36.089 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:36.347 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:36.605 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:36.605 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:36.862 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:36.862 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:36.862 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:36.862 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:37.428 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:37.428 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:37.686 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:37.686 [712/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:37.686 [713/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:37.951 [714/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:37.951 [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:37.951 [716/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:37.951 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:38.209 [718/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:38.776 [719/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:38.776 [720/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:38.776 [721/740] Linking static target lib/librte_pipeline.a 00:02:38.776 [722/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:39.034 [723/740] Linking target app/dpdk-test-eventdev 00:02:39.034 [724/740] Linking target app/dpdk-proc-info 00:02:39.034 [725/740] Linking target app/dpdk-pdump 00:02:39.034 [726/740] Linking target app/dpdk-test-compress-perf 00:02:39.034 [727/740] Linking target app/dpdk-test-crypto-perf 00:02:39.292 [728/740] Linking target app/dpdk-test-acl 00:02:39.292 [729/740] Linking target app/dpdk-test-bbdev 00:02:39.292 [730/740] Linking target app/dpdk-test-fib 00:02:39.292 [731/740] Linking target app/dpdk-test-cmdline 00:02:39.550 [732/740] Linking target app/dpdk-test-gpudev 00:02:39.550 [733/740] Linking target app/dpdk-test-flow-perf 00:02:39.550 [734/740] Linking target app/dpdk-test-regex 00:02:39.550 [735/740] Linking target app/dpdk-test-pipeline 00:02:39.550 [736/740] Linking target app/dpdk-test-sad 00:02:39.808 [737/740] Linking target app/dpdk-test-security-perf 00:02:39.808 [738/740] Linking target app/dpdk-testpmd 00:02:42.341 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.341 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:42.341 10:28:08 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:42.341 ninja: Entering 
directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:42.341 [0/1] Installing files. 00:02:42.912 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 
00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:42.912 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:42.913 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 
00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.913 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:42.914 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:42.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:42.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.915 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:42.916 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:42.916 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:42.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:42.917 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:42.917 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:42.917 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.176 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 
Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.177 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.177 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.177 Installing 
drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.177 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:02:43.177 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.177 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.177 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.177 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.177 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.177 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.177 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.177 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.438 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.438 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.438 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.438 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.438 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.438 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.439 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 
Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.440 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.441 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:43.442 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:43.442 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:43.442 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:43.442 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:43.442 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:43.442 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:43.442 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:43.442 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:43.442 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:43.442 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:43.442 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:43.442 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:43.442 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:43.442 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:43.442 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:43.442 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:43.442 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:43.442 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:43.442 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:43.442 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:43.442 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:43.442 Installing symlink pointing to librte_pci.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:43.442 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:43.442 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:43.442 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:43.442 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:43.442 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:43.442 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:43.442 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:43.442 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:43.442 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:43.442 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:43.442 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:43.442 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:43.442 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:43.442 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:43.442 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:43.442 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:43.442 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:43.442 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:43.442 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:43.442 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:43.442 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:43.442 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:43.442 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:43.442 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:43.442 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:43.442 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:43.442 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:43.442 Installing symlink pointing to librte_eventdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:43.442 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:43.442 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:43.442 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:43.442 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:43.442 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:43.442 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:43.442 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:43.442 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:43.442 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:43.442 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:43.442 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:43.442 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:43.442 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:43.442 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:43.442 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:43.442 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:43.442 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:43.442 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:43.442 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:43.442 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:43.442 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:43.442 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:43.442 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:43.442 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:43.442 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:43.442 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:43.442 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:43.442 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:43.442 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:43.442 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:43.442 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:43.442 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 
00:02:43.442 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:43.442 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:43.442 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:43.442 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:43.442 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:43.442 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:43.443 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:43.443 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:43.443 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:43.443 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:43.443 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:43.443 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:43.443 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:43.443 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:43.443 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:43.443 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:43.443 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:43.443 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:43.443 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:43.443 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:43.443 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:43.443 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:43.443 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:43.443 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:43.443 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:43.443 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:43.443 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:43.443 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:43.443 Installing symlink pointing to librte_table.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:43.443 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:43.443 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:43.443 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:43.443 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:43.443 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:43.443 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:43.443 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:43.443 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:43.443 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:43.443 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:43.443 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:43.443 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:43.443 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:43.443 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:43.443 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:43.443 10:28:09 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:43.443 10:28:09 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:43.443 10:28:09 -- common/autobuild_common.sh@203 -- $ cat 00:02:43.443 10:28:09 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:43.443 00:02:43.443 real 0m49.428s 00:02:43.443 user 5m28.793s 00:02:43.443 sys 0m51.434s 00:02:43.443 10:28:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:43.443 10:28:09 -- common/autotest_common.sh@10 -- $ set +x 00:02:43.443 ************************************ 00:02:43.443 END TEST build_native_dpdk 00:02:43.443 ************************************ 00:02:43.443 10:28:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:43.443 10:28:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:43.443 10:28:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:43.443 10:28:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:43.443 10:28:10 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:43.443 10:28:10 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:43.443 10:28:10 -- common/autobuild_common.sh@414 -- $ run_test unittest_build _unittest_build 00:02:43.443 10:28:10 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:43.443 10:28:10 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:43.443 10:28:10 -- common/autotest_common.sh@10 -- $ set +x 
00:02:43.443 ************************************ 00:02:43.443 START TEST unittest_build 00:02:43.443 ************************************ 00:02:43.443 10:28:10 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:02:43.443 10:28:10 -- common/autobuild_common.sh@405 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:02:43.702 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:43.702 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:43.702 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:43.702 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:43.973 Using 'verbs' RDMA provider 00:02:56.462 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:11.343 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:11.343 Creating mk/config.mk...done. 00:03:11.343 Creating mk/cc.flags.mk...done. 00:03:11.343 Type 'make' to build. 00:03:11.343 10:28:35 -- common/autobuild_common.sh@406 -- $ make -j10 00:03:11.343 make[1]: Nothing to be done for 'all'. 00:03:29.441 CC lib/ut/ut.o 00:03:29.441 CC lib/ut_mock/mock.o 00:03:29.441 CC lib/log/log.o 00:03:29.441 CC lib/log/log_flags.o 00:03:29.441 CC lib/log/log_deprecated.o 00:03:29.441 LIB libspdk_ut_mock.a 00:03:29.441 LIB libspdk_log.a 00:03:29.441 LIB libspdk_ut.a 00:03:29.441 CC lib/util/base64.o 00:03:29.441 CC lib/dma/dma.o 00:03:29.441 CC lib/util/bit_array.o 00:03:29.441 CC lib/util/crc16.o 00:03:29.441 CC lib/util/cpuset.o 00:03:29.441 CC lib/ioat/ioat.o 00:03:29.441 CXX lib/trace_parser/trace.o 00:03:29.441 CC lib/util/crc32.o 00:03:29.441 CC lib/util/crc32c.o 00:03:29.441 CC lib/vfio_user/host/vfio_user_pci.o 00:03:29.441 CC lib/util/crc32_ieee.o 00:03:29.441 CC lib/util/crc64.o 00:03:29.441 CC lib/util/dif.o 00:03:29.441 CC lib/util/fd.o 00:03:29.441 LIB libspdk_dma.a 00:03:29.441 CC lib/vfio_user/host/vfio_user.o 00:03:29.441 CC lib/util/file.o 00:03:29.441 CC lib/util/hexlify.o 00:03:29.441 CC lib/util/iov.o 00:03:29.441 LIB libspdk_ioat.a 00:03:29.441 CC lib/util/math.o 00:03:29.441 CC lib/util/pipe.o 00:03:29.441 CC lib/util/strerror_tls.o 00:03:29.441 CC lib/util/string.o 00:03:29.441 CC lib/util/uuid.o 00:03:29.441 CC lib/util/fd_group.o 00:03:29.441 LIB libspdk_vfio_user.a 00:03:29.441 CC lib/util/xor.o 00:03:29.441 CC lib/util/zipf.o 00:03:29.699 LIB libspdk_util.a 00:03:29.957 CC lib/rdma/common.o 00:03:29.957 CC lib/conf/conf.o 00:03:29.957 CC lib/json/json_parse.o 00:03:29.957 CC lib/json/json_util.o 00:03:29.957 CC lib/json/json_write.o 00:03:29.957 CC lib/env_dpdk/env.o 00:03:29.957 CC lib/idxd/idxd.o 00:03:29.957 CC lib/env_dpdk/memory.o 00:03:29.957 CC lib/vmd/vmd.o 00:03:29.957 LIB libspdk_trace_parser.a 00:03:29.957 CC lib/vmd/led.o 00:03:30.215 LIB libspdk_conf.a 00:03:30.215 CC lib/idxd/idxd_user.o 00:03:30.215 CC lib/env_dpdk/pci.o 00:03:30.215 CC lib/rdma/rdma_verbs.o 00:03:30.215 CC lib/env_dpdk/init.o 00:03:30.215 CC lib/env_dpdk/threads.o 00:03:30.215 LIB libspdk_json.a 00:03:30.215 CC lib/env_dpdk/pci_ioat.o 00:03:30.473 CC lib/jsonrpc/jsonrpc_server.o 00:03:30.473 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:30.473 CC lib/jsonrpc/jsonrpc_client.o 00:03:30.473 LIB libspdk_rdma.a 00:03:30.473 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:03:30.473 CC lib/env_dpdk/pci_virtio.o 00:03:30.731 LIB libspdk_idxd.a 00:03:30.731 CC lib/env_dpdk/pci_vmd.o 00:03:30.731 CC lib/env_dpdk/pci_idxd.o 00:03:30.731 CC lib/env_dpdk/pci_event.o 00:03:30.731 CC lib/env_dpdk/sigbus_handler.o 00:03:30.731 CC lib/env_dpdk/pci_dpdk.o 00:03:30.731 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:30.731 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:30.731 LIB libspdk_jsonrpc.a 00:03:30.731 LIB libspdk_vmd.a 00:03:30.731 CC lib/rpc/rpc.o 00:03:30.990 LIB libspdk_rpc.a 00:03:31.248 CC lib/notify/notify.o 00:03:31.248 CC lib/notify/notify_rpc.o 00:03:31.248 CC lib/sock/sock.o 00:03:31.248 CC lib/sock/sock_rpc.o 00:03:31.248 CC lib/trace/trace.o 00:03:31.248 CC lib/trace/trace_flags.o 00:03:31.248 CC lib/trace/trace_rpc.o 00:03:31.248 LIB libspdk_notify.a 00:03:31.507 LIB libspdk_trace.a 00:03:31.507 LIB libspdk_env_dpdk.a 00:03:31.507 CC lib/thread/thread.o 00:03:31.507 CC lib/thread/iobuf.o 00:03:31.507 LIB libspdk_sock.a 00:03:31.765 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:31.765 CC lib/nvme/nvme_ctrlr.o 00:03:31.765 CC lib/nvme/nvme_fabric.o 00:03:31.765 CC lib/nvme/nvme_ns_cmd.o 00:03:31.765 CC lib/nvme/nvme_pcie_common.o 00:03:31.765 CC lib/nvme/nvme_ns.o 00:03:31.765 CC lib/nvme/nvme_pcie.o 00:03:31.765 CC lib/nvme/nvme_qpair.o 00:03:32.023 CC lib/nvme/nvme.o 00:03:32.285 CC lib/nvme/nvme_quirks.o 00:03:32.285 CC lib/nvme/nvme_transport.o 00:03:32.544 CC lib/nvme/nvme_discovery.o 00:03:32.544 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:32.544 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:32.544 CC lib/nvme/nvme_tcp.o 00:03:32.802 CC lib/nvme/nvme_opal.o 00:03:32.802 CC lib/nvme/nvme_io_msg.o 00:03:33.060 CC lib/nvme/nvme_poll_group.o 00:03:33.060 CC lib/nvme/nvme_zns.o 00:03:33.060 CC lib/nvme/nvme_cuse.o 00:03:33.060 CC lib/nvme/nvme_vfio_user.o 00:03:33.060 CC lib/nvme/nvme_rdma.o 00:03:33.319 LIB libspdk_thread.a 00:03:33.578 CC lib/accel/accel.o 00:03:33.578 CC lib/blob/blobstore.o 00:03:33.578 CC lib/blob/request.o 00:03:33.578 CC lib/init/json_config.o 00:03:33.578 CC lib/virtio/virtio.o 00:03:33.578 CC lib/virtio/virtio_vhost_user.o 00:03:33.836 CC lib/init/subsystem.o 00:03:33.836 CC lib/init/subsystem_rpc.o 00:03:33.836 CC lib/blob/zeroes.o 00:03:33.837 CC lib/blob/blob_bs_dev.o 00:03:33.837 CC lib/init/rpc.o 00:03:33.837 CC lib/virtio/virtio_vfio_user.o 00:03:33.837 CC lib/virtio/virtio_pci.o 00:03:34.095 CC lib/accel/accel_rpc.o 00:03:34.095 LIB libspdk_init.a 00:03:34.095 CC lib/accel/accel_sw.o 00:03:34.095 CC lib/event/app.o 00:03:34.095 CC lib/event/reactor.o 00:03:34.095 CC lib/event/log_rpc.o 00:03:34.095 CC lib/event/app_rpc.o 00:03:34.354 LIB libspdk_virtio.a 00:03:34.354 CC lib/event/scheduler_static.o 00:03:34.619 LIB libspdk_nvme.a 00:03:34.619 LIB libspdk_accel.a 00:03:34.619 LIB libspdk_event.a 00:03:34.883 CC lib/bdev/bdev.o 00:03:34.883 CC lib/bdev/bdev_rpc.o 00:03:34.883 CC lib/bdev/bdev_zone.o 00:03:34.883 CC lib/bdev/part.o 00:03:34.883 CC lib/bdev/scsi_nvme.o 00:03:37.443 LIB libspdk_blob.a 00:03:37.443 CC lib/blobfs/blobfs.o 00:03:37.443 CC lib/blobfs/tree.o 00:03:37.443 CC lib/lvol/lvol.o 00:03:37.701 LIB libspdk_bdev.a 00:03:37.960 CC lib/scsi/dev.o 00:03:37.960 CC lib/ftl/ftl_core.o 00:03:37.960 CC lib/scsi/lun.o 00:03:37.960 CC lib/ftl/ftl_init.o 00:03:37.960 CC lib/scsi/port.o 00:03:37.960 CC lib/ftl/ftl_layout.o 00:03:37.960 CC lib/nvmf/ctrlr.o 00:03:37.960 CC lib/nbd/nbd.o 00:03:38.218 CC lib/nvmf/ctrlr_discovery.o 00:03:38.219 CC lib/nvmf/ctrlr_bdev.o 00:03:38.219 CC lib/nvmf/subsystem.o 00:03:38.477 LIB 
libspdk_blobfs.a 00:03:38.477 CC lib/nvmf/nvmf.o 00:03:38.477 CC lib/scsi/scsi.o 00:03:38.477 CC lib/scsi/scsi_bdev.o 00:03:38.477 CC lib/ftl/ftl_debug.o 00:03:38.477 CC lib/nbd/nbd_rpc.o 00:03:38.477 LIB libspdk_lvol.a 00:03:38.733 CC lib/nvmf/nvmf_rpc.o 00:03:38.733 CC lib/nvmf/transport.o 00:03:38.733 CC lib/ftl/ftl_io.o 00:03:38.733 LIB libspdk_nbd.a 00:03:38.733 CC lib/ftl/ftl_sb.o 00:03:38.733 CC lib/scsi/scsi_pr.o 00:03:38.990 CC lib/ftl/ftl_l2p.o 00:03:38.990 CC lib/nvmf/tcp.o 00:03:38.990 CC lib/nvmf/rdma.o 00:03:38.990 CC lib/scsi/scsi_rpc.o 00:03:39.248 CC lib/ftl/ftl_l2p_flat.o 00:03:39.248 CC lib/ftl/ftl_nv_cache.o 00:03:39.248 CC lib/scsi/task.o 00:03:39.248 CC lib/ftl/ftl_band.o 00:03:39.248 CC lib/ftl/ftl_band_ops.o 00:03:39.505 CC lib/ftl/ftl_writer.o 00:03:39.505 LIB libspdk_scsi.a 00:03:39.505 CC lib/ftl/ftl_rq.o 00:03:39.505 CC lib/ftl/ftl_reloc.o 00:03:39.505 CC lib/iscsi/conn.o 00:03:39.763 CC lib/ftl/ftl_l2p_cache.o 00:03:39.763 CC lib/ftl/ftl_p2l.o 00:03:39.763 CC lib/ftl/mngt/ftl_mngt.o 00:03:39.763 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:40.020 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:40.020 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:40.020 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:40.020 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:40.020 CC lib/iscsi/init_grp.o 00:03:40.020 CC lib/iscsi/iscsi.o 00:03:40.020 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:40.277 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:40.277 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:40.277 CC lib/iscsi/md5.o 00:03:40.277 CC lib/iscsi/param.o 00:03:40.277 CC lib/vhost/vhost.o 00:03:40.277 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:40.277 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:40.277 CC lib/vhost/vhost_rpc.o 00:03:40.535 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:40.535 CC lib/vhost/vhost_scsi.o 00:03:40.535 CC lib/vhost/vhost_blk.o 00:03:40.535 CC lib/vhost/rte_vhost_user.o 00:03:40.792 CC lib/iscsi/portal_grp.o 00:03:40.792 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:40.792 CC lib/ftl/utils/ftl_conf.o 00:03:40.792 CC lib/ftl/utils/ftl_md.o 00:03:41.049 CC lib/ftl/utils/ftl_mempool.o 00:03:41.049 CC lib/ftl/utils/ftl_bitmap.o 00:03:41.049 CC lib/iscsi/tgt_node.o 00:03:41.049 CC lib/ftl/utils/ftl_property.o 00:03:41.049 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:41.049 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:41.307 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:41.307 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:41.307 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:41.307 CC lib/iscsi/iscsi_subsystem.o 00:03:41.307 CC lib/iscsi/iscsi_rpc.o 00:03:41.565 CC lib/iscsi/task.o 00:03:41.565 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:41.565 LIB libspdk_nvmf.a 00:03:41.565 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:41.565 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:41.565 LIB libspdk_vhost.a 00:03:41.565 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:41.565 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:41.565 CC lib/ftl/base/ftl_base_dev.o 00:03:41.565 CC lib/ftl/base/ftl_base_bdev.o 00:03:41.565 CC lib/ftl/ftl_trace.o 00:03:41.823 LIB libspdk_iscsi.a 00:03:42.079 LIB libspdk_ftl.a 00:03:42.336 CC module/env_dpdk/env_dpdk_rpc.o 00:03:42.336 CC module/accel/iaa/accel_iaa.o 00:03:42.336 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:42.336 CC module/accel/dsa/accel_dsa.o 00:03:42.336 CC module/accel/error/accel_error.o 00:03:42.336 CC module/scheduler/gscheduler/gscheduler.o 00:03:42.336 CC module/blob/bdev/blob_bdev.o 00:03:42.336 CC module/sock/posix/posix.o 00:03:42.336 CC module/accel/ioat/accel_ioat.o 00:03:42.336 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:03:42.336 LIB libspdk_env_dpdk_rpc.a 00:03:42.593 CC module/accel/dsa/accel_dsa_rpc.o 00:03:42.593 LIB libspdk_scheduler_dpdk_governor.a 00:03:42.593 CC module/accel/error/accel_error_rpc.o 00:03:42.593 LIB libspdk_scheduler_gscheduler.a 00:03:42.593 CC module/accel/iaa/accel_iaa_rpc.o 00:03:42.593 LIB libspdk_scheduler_dynamic.a 00:03:42.593 CC module/accel/ioat/accel_ioat_rpc.o 00:03:42.593 LIB libspdk_accel_dsa.a 00:03:42.593 LIB libspdk_blob_bdev.a 00:03:42.593 LIB libspdk_accel_error.a 00:03:42.850 LIB libspdk_accel_iaa.a 00:03:42.850 LIB libspdk_accel_ioat.a 00:03:42.850 CC module/bdev/delay/vbdev_delay.o 00:03:42.850 CC module/bdev/gpt/gpt.o 00:03:42.850 CC module/bdev/error/vbdev_error.o 00:03:42.850 CC module/blobfs/bdev/blobfs_bdev.o 00:03:42.850 CC module/bdev/null/bdev_null.o 00:03:42.850 CC module/bdev/lvol/vbdev_lvol.o 00:03:42.850 CC module/bdev/malloc/bdev_malloc.o 00:03:42.850 CC module/bdev/nvme/bdev_nvme.o 00:03:42.850 CC module/bdev/passthru/vbdev_passthru.o 00:03:43.107 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:43.107 CC module/bdev/gpt/vbdev_gpt.o 00:03:43.107 CC module/bdev/null/bdev_null_rpc.o 00:03:43.107 CC module/bdev/error/vbdev_error_rpc.o 00:03:43.107 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:43.364 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:43.364 LIB libspdk_sock_posix.a 00:03:43.364 LIB libspdk_blobfs_bdev.a 00:03:43.364 LIB libspdk_bdev_null.a 00:03:43.364 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:43.364 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:43.364 CC module/bdev/nvme/nvme_rpc.o 00:03:43.364 LIB libspdk_bdev_gpt.a 00:03:43.364 LIB libspdk_bdev_error.a 00:03:43.364 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:43.364 LIB libspdk_bdev_delay.a 00:03:43.364 LIB libspdk_bdev_malloc.a 00:03:43.364 CC module/bdev/raid/bdev_raid.o 00:03:43.364 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:43.364 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:43.364 CC module/bdev/split/vbdev_split.o 00:03:43.364 LIB libspdk_bdev_passthru.a 00:03:43.622 CC module/bdev/aio/bdev_aio.o 00:03:43.622 CC module/bdev/aio/bdev_aio_rpc.o 00:03:43.622 CC module/bdev/ftl/bdev_ftl.o 00:03:43.622 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:43.622 LIB libspdk_bdev_lvol.a 00:03:43.622 CC module/bdev/split/vbdev_split_rpc.o 00:03:43.879 CC module/bdev/nvme/bdev_mdns_client.o 00:03:43.879 LIB libspdk_bdev_zone_block.a 00:03:43.879 CC module/bdev/iscsi/bdev_iscsi.o 00:03:43.879 CC module/bdev/nvme/vbdev_opal.o 00:03:43.879 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:43.879 LIB libspdk_bdev_ftl.a 00:03:43.879 LIB libspdk_bdev_aio.a 00:03:43.879 LIB libspdk_bdev_split.a 00:03:43.879 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:43.879 CC module/bdev/raid/bdev_raid_rpc.o 00:03:43.879 CC module/bdev/raid/bdev_raid_sb.o 00:03:44.136 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:44.136 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:44.136 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:44.136 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:44.136 CC module/bdev/raid/raid0.o 00:03:44.136 CC module/bdev/raid/raid1.o 00:03:44.136 CC module/bdev/raid/concat.o 00:03:44.136 CC module/bdev/raid/raid5f.o 00:03:44.136 LIB libspdk_bdev_iscsi.a 00:03:44.702 LIB libspdk_bdev_virtio.a 00:03:44.702 LIB libspdk_bdev_raid.a 00:03:45.635 LIB libspdk_bdev_nvme.a 00:03:45.893 CC module/event/subsystems/sock/sock.o 00:03:45.893 CC module/event/subsystems/vmd/vmd.o 00:03:45.893 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:45.893 CC 
module/event/subsystems/iobuf/iobuf.o 00:03:45.893 CC module/event/subsystems/scheduler/scheduler.o 00:03:45.893 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:45.893 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:45.893 LIB libspdk_event_sock.a 00:03:45.893 LIB libspdk_event_vhost_blk.a 00:03:45.893 LIB libspdk_event_vmd.a 00:03:45.893 LIB libspdk_event_iobuf.a 00:03:45.893 LIB libspdk_event_scheduler.a 00:03:46.151 CC module/event/subsystems/accel/accel.o 00:03:46.151 LIB libspdk_event_accel.a 00:03:46.410 CC module/event/subsystems/bdev/bdev.o 00:03:46.668 LIB libspdk_event_bdev.a 00:03:46.668 CC module/event/subsystems/scsi/scsi.o 00:03:46.668 CC module/event/subsystems/nbd/nbd.o 00:03:46.668 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:46.668 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:46.924 LIB libspdk_event_nbd.a 00:03:46.924 LIB libspdk_event_scsi.a 00:03:47.181 CC module/event/subsystems/iscsi/iscsi.o 00:03:47.181 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:47.181 LIB libspdk_event_nvmf.a 00:03:47.181 LIB libspdk_event_vhost_scsi.a 00:03:47.181 LIB libspdk_event_iscsi.a 00:03:47.439 CXX app/trace/trace.o 00:03:47.439 CC examples/sock/hello_world/hello_sock.o 00:03:47.439 CC examples/vmd/lsvmd/lsvmd.o 00:03:47.439 CC examples/ioat/perf/perf.o 00:03:47.439 CC examples/accel/perf/accel_perf.o 00:03:47.439 CC examples/nvme/hello_world/hello_world.o 00:03:47.439 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.439 CC examples/blob/hello_world/hello_blob.o 00:03:47.439 CC examples/nvmf/nvmf/nvmf.o 00:03:47.439 CC test/accel/dif/dif.o 00:03:47.698 LINK lsvmd 00:03:47.698 LINK ioat_perf 00:03:47.698 LINK hello_sock 00:03:47.698 LINK hello_blob 00:03:47.698 LINK hello_world 00:03:47.698 LINK hello_bdev 00:03:47.698 LINK spdk_trace 00:03:47.957 LINK nvmf 00:03:47.957 LINK dif 00:03:47.957 LINK accel_perf 00:03:48.215 CC app/trace_record/trace_record.o 00:03:48.215 CC examples/ioat/verify/verify.o 00:03:48.473 LINK spdk_trace_record 00:03:48.473 LINK verify 00:03:48.473 CC app/nvmf_tgt/nvmf_main.o 00:03:48.732 LINK nvmf_tgt 00:03:48.732 CC examples/vmd/led/led.o 00:03:48.732 CC examples/nvme/reconnect/reconnect.o 00:03:48.989 LINK led 00:03:48.989 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:48.989 CC examples/nvme/arbitration/arbitration.o 00:03:48.989 CC examples/nvme/hotplug/hotplug.o 00:03:49.248 LINK reconnect 00:03:49.248 LINK hotplug 00:03:49.248 LINK arbitration 00:03:49.506 LINK nvme_manage 00:03:50.072 CC examples/bdev/bdevperf/bdevperf.o 00:03:50.330 CC test/app/bdev_svc/bdev_svc.o 00:03:50.330 CC examples/blob/cli/blobcli.o 00:03:50.588 LINK bdev_svc 00:03:50.588 CC examples/util/zipf/zipf.o 00:03:50.588 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:50.846 LINK zipf 00:03:50.846 LINK cmb_copy 00:03:50.846 CC examples/thread/thread/thread_ex.o 00:03:50.846 LINK bdevperf 00:03:50.846 LINK blobcli 00:03:51.109 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:51.109 LINK thread 00:03:51.109 CC test/app/histogram_perf/histogram_perf.o 00:03:51.109 CC test/app/jsoncat/jsoncat.o 00:03:51.368 LINK histogram_perf 00:03:51.368 CC test/app/stub/stub.o 00:03:51.368 LINK jsoncat 00:03:51.368 LINK nvme_fuzz 00:03:51.626 LINK stub 00:03:51.884 CC examples/nvme/abort/abort.o 00:03:51.884 CC app/iscsi_tgt/iscsi_tgt.o 00:03:52.142 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:52.142 CC app/spdk_tgt/spdk_tgt.o 00:03:52.142 LINK iscsi_tgt 00:03:52.142 LINK pmr_persistence 00:03:52.400 LINK spdk_tgt 00:03:52.400 LINK abort 00:03:52.658 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:52.916 CC test/bdev/bdevio/bdevio.o 00:03:53.483 LINK bdevio 00:03:53.483 CC test/blobfs/mkfs/mkfs.o 00:03:53.483 CC examples/idxd/perf/perf.o 00:03:53.741 LINK mkfs 00:03:53.741 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:53.998 LINK idxd_perf 00:03:53.998 CC app/spdk_nvme_perf/perf.o 00:03:53.998 CC app/spdk_lspci/spdk_lspci.o 00:03:53.998 LINK interrupt_tgt 00:03:53.998 LINK spdk_lspci 00:03:54.256 CC app/spdk_nvme_identify/identify.o 00:03:54.515 CC app/spdk_nvme_discover/discovery_aer.o 00:03:54.515 LINK iscsi_fuzz 00:03:54.773 LINK spdk_nvme_discover 00:03:55.031 LINK spdk_nvme_perf 00:03:55.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:55.290 LINK spdk_nvme_identify 00:03:55.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:55.290 CC app/spdk_top/spdk_top.o 00:03:55.548 CC app/vhost/vhost.o 00:03:55.806 LINK vhost 00:03:55.806 LINK vhost_fuzz 00:03:55.806 CC app/spdk_dd/spdk_dd.o 00:03:55.806 TEST_HEADER include/spdk/accel.h 00:03:55.806 TEST_HEADER include/spdk/accel_module.h 00:03:55.806 TEST_HEADER include/spdk/assert.h 00:03:55.806 TEST_HEADER include/spdk/barrier.h 00:03:55.806 TEST_HEADER include/spdk/base64.h 00:03:56.065 TEST_HEADER include/spdk/bdev.h 00:03:56.065 TEST_HEADER include/spdk/bdev_module.h 00:03:56.065 TEST_HEADER include/spdk/bdev_zone.h 00:03:56.065 TEST_HEADER include/spdk/bit_array.h 00:03:56.065 TEST_HEADER include/spdk/bit_pool.h 00:03:56.065 TEST_HEADER include/spdk/blob.h 00:03:56.065 TEST_HEADER include/spdk/blob_bdev.h 00:03:56.065 TEST_HEADER include/spdk/blobfs.h 00:03:56.065 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:56.065 TEST_HEADER include/spdk/conf.h 00:03:56.065 TEST_HEADER include/spdk/config.h 00:03:56.065 TEST_HEADER include/spdk/cpuset.h 00:03:56.065 TEST_HEADER include/spdk/crc16.h 00:03:56.065 TEST_HEADER include/spdk/crc32.h 00:03:56.065 TEST_HEADER include/spdk/crc64.h 00:03:56.065 TEST_HEADER include/spdk/dif.h 00:03:56.065 TEST_HEADER include/spdk/dma.h 00:03:56.065 TEST_HEADER include/spdk/endian.h 00:03:56.065 TEST_HEADER include/spdk/env.h 00:03:56.065 TEST_HEADER include/spdk/env_dpdk.h 00:03:56.065 TEST_HEADER include/spdk/event.h 00:03:56.065 TEST_HEADER include/spdk/fd.h 00:03:56.065 TEST_HEADER include/spdk/fd_group.h 00:03:56.065 TEST_HEADER include/spdk/file.h 00:03:56.065 TEST_HEADER include/spdk/ftl.h 00:03:56.065 TEST_HEADER include/spdk/gpt_spec.h 00:03:56.065 TEST_HEADER include/spdk/hexlify.h 00:03:56.065 TEST_HEADER include/spdk/histogram_data.h 00:03:56.065 TEST_HEADER include/spdk/idxd.h 00:03:56.065 TEST_HEADER include/spdk/idxd_spec.h 00:03:56.065 TEST_HEADER include/spdk/init.h 00:03:56.065 TEST_HEADER include/spdk/ioat.h 00:03:56.065 TEST_HEADER include/spdk/ioat_spec.h 00:03:56.065 TEST_HEADER include/spdk/iscsi_spec.h 00:03:56.065 TEST_HEADER include/spdk/json.h 00:03:56.065 TEST_HEADER include/spdk/jsonrpc.h 00:03:56.065 TEST_HEADER include/spdk/likely.h 00:03:56.065 TEST_HEADER include/spdk/log.h 00:03:56.065 TEST_HEADER include/spdk/lvol.h 00:03:56.065 TEST_HEADER include/spdk/memory.h 00:03:56.065 TEST_HEADER include/spdk/mmio.h 00:03:56.065 CC test/dma/test_dma/test_dma.o 00:03:56.065 TEST_HEADER include/spdk/nbd.h 00:03:56.065 TEST_HEADER include/spdk/notify.h 00:03:56.065 TEST_HEADER include/spdk/nvme.h 00:03:56.065 TEST_HEADER include/spdk/nvme_intel.h 00:03:56.065 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:56.065 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:56.065 TEST_HEADER include/spdk/nvme_spec.h 00:03:56.065 TEST_HEADER 
include/spdk/nvme_zns.h 00:03:56.065 TEST_HEADER include/spdk/nvmf.h 00:03:56.065 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:56.065 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:56.065 TEST_HEADER include/spdk/nvmf_spec.h 00:03:56.065 TEST_HEADER include/spdk/nvmf_transport.h 00:03:56.065 TEST_HEADER include/spdk/opal.h 00:03:56.065 TEST_HEADER include/spdk/opal_spec.h 00:03:56.065 TEST_HEADER include/spdk/pci_ids.h 00:03:56.065 TEST_HEADER include/spdk/pipe.h 00:03:56.065 TEST_HEADER include/spdk/queue.h 00:03:56.065 TEST_HEADER include/spdk/reduce.h 00:03:56.065 TEST_HEADER include/spdk/rpc.h 00:03:56.065 TEST_HEADER include/spdk/scheduler.h 00:03:56.065 TEST_HEADER include/spdk/scsi.h 00:03:56.065 TEST_HEADER include/spdk/scsi_spec.h 00:03:56.065 TEST_HEADER include/spdk/sock.h 00:03:56.065 TEST_HEADER include/spdk/stdinc.h 00:03:56.065 TEST_HEADER include/spdk/string.h 00:03:56.065 TEST_HEADER include/spdk/thread.h 00:03:56.065 TEST_HEADER include/spdk/trace.h 00:03:56.065 TEST_HEADER include/spdk/trace_parser.h 00:03:56.065 TEST_HEADER include/spdk/tree.h 00:03:56.065 TEST_HEADER include/spdk/ublk.h 00:03:56.065 TEST_HEADER include/spdk/util.h 00:03:56.065 TEST_HEADER include/spdk/uuid.h 00:03:56.065 TEST_HEADER include/spdk/version.h 00:03:56.065 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:56.065 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:56.065 TEST_HEADER include/spdk/vhost.h 00:03:56.065 TEST_HEADER include/spdk/vmd.h 00:03:56.065 TEST_HEADER include/spdk/xor.h 00:03:56.065 TEST_HEADER include/spdk/zipf.h 00:03:56.065 CXX test/cpp_headers/accel.o 00:03:56.324 CC app/fio/nvme/fio_plugin.o 00:03:56.324 LINK spdk_dd 00:03:56.324 CXX test/cpp_headers/accel_module.o 00:03:56.324 LINK spdk_top 00:03:56.324 CC test/env/vtophys/vtophys.o 00:03:56.324 LINK test_dma 00:03:56.324 CXX test/cpp_headers/assert.o 00:03:56.324 CC test/env/mem_callbacks/mem_callbacks.o 00:03:56.582 LINK vtophys 00:03:56.582 CXX test/cpp_headers/barrier.o 00:03:56.582 LINK mem_callbacks 00:03:56.582 CC test/event/event_perf/event_perf.o 00:03:56.841 CXX test/cpp_headers/base64.o 00:03:56.841 CXX test/cpp_headers/bdev.o 00:03:56.841 LINK event_perf 00:03:56.841 LINK spdk_nvme 00:03:57.099 CC test/lvol/esnap/esnap.o 00:03:57.099 CXX test/cpp_headers/bdev_module.o 00:03:57.099 CC test/nvme/aer/aer.o 00:03:57.099 CC test/rpc_client/rpc_client_test.o 00:03:57.099 CXX test/cpp_headers/bdev_zone.o 00:03:57.358 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.358 LINK rpc_client_test 00:03:57.358 LINK aer 00:03:57.358 CXX test/cpp_headers/bit_array.o 00:03:57.616 CC test/event/reactor/reactor.o 00:03:57.874 CC test/thread/poller_perf/poller_perf.o 00:03:57.874 CXX test/cpp_headers/bit_pool.o 00:03:57.874 LINK env_dpdk_post_init 00:03:57.874 LINK reactor 00:03:57.874 CC app/fio/bdev/fio_plugin.o 00:03:58.133 LINK poller_perf 00:03:58.133 CXX test/cpp_headers/blob.o 00:03:58.391 CXX test/cpp_headers/blob_bdev.o 00:03:58.391 CC test/nvme/reset/reset.o 00:03:58.391 LINK spdk_bdev 00:03:58.391 CXX test/cpp_headers/blobfs.o 00:03:58.650 LINK reset 00:03:58.650 CXX test/cpp_headers/blobfs_bdev.o 00:03:58.650 CC test/event/reactor_perf/reactor_perf.o 00:03:58.908 CC test/thread/lock/spdk_lock.o 00:03:58.908 CXX test/cpp_headers/conf.o 00:03:58.908 LINK reactor_perf 00:03:58.908 CC test/event/app_repeat/app_repeat.o 00:03:58.908 CXX test/cpp_headers/config.o 00:03:58.908 CC test/env/memory/memory_ut.o 00:03:58.908 CXX test/cpp_headers/cpuset.o 00:03:59.167 LINK app_repeat 00:03:59.167 CXX test/cpp_headers/crc16.o 
00:03:59.167 CXX test/cpp_headers/crc32.o 00:03:59.167 CC test/env/pci/pci_ut.o 00:03:59.426 CXX test/cpp_headers/crc64.o 00:03:59.426 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:59.426 LINK memory_ut 00:03:59.426 CXX test/cpp_headers/dif.o 00:03:59.684 LINK histogram_ut 00:03:59.684 LINK pci_ut 00:03:59.684 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:59.684 CC test/nvme/sgl/sgl.o 00:03:59.684 CXX test/cpp_headers/dma.o 00:03:59.684 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:59.684 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:59.942 CXX test/cpp_headers/endian.o 00:03:59.942 LINK sgl 00:03:59.942 CXX test/cpp_headers/env.o 00:03:59.942 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:04:00.201 CC test/event/scheduler/scheduler.o 00:04:00.201 CXX test/cpp_headers/env_dpdk.o 00:04:00.201 LINK scsi_nvme_ut 00:04:00.459 CXX test/cpp_headers/event.o 00:04:00.459 LINK scheduler 00:04:00.459 CXX test/cpp_headers/fd.o 00:04:00.459 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:04:00.459 LINK spdk_lock 00:04:00.717 CXX test/cpp_headers/fd_group.o 00:04:00.975 CXX test/cpp_headers/file.o 00:04:00.975 LINK gpt_ut 00:04:00.975 CC test/nvme/e2edp/nvme_dp.o 00:04:00.975 CXX test/cpp_headers/ftl.o 00:04:01.233 CC test/nvme/overhead/overhead.o 00:04:01.233 CXX test/cpp_headers/gpt_spec.o 00:04:01.233 CXX test/cpp_headers/hexlify.o 00:04:01.233 CXX test/cpp_headers/histogram_data.o 00:04:01.233 LINK nvme_dp 00:04:01.492 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:04:01.492 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:04:01.492 CXX test/cpp_headers/idxd.o 00:04:01.492 LINK overhead 00:04:01.750 LINK tree_ut 00:04:01.750 CXX test/cpp_headers/idxd_spec.o 00:04:01.750 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:04:02.010 CXX test/cpp_headers/init.o 00:04:02.010 LINK blob_bdev_ut 00:04:02.010 CXX test/cpp_headers/ioat.o 00:04:02.010 LINK accel_ut 00:04:02.268 CXX test/cpp_headers/ioat_spec.o 00:04:02.268 CXX test/cpp_headers/iscsi_spec.o 00:04:02.526 CC test/unit/lib/blob/blob.c/blob_ut.o 00:04:02.526 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:04:02.526 CC test/nvme/err_injection/err_injection.o 00:04:02.526 CC test/unit/lib/dma/dma.c/dma_ut.o 00:04:02.526 CXX test/cpp_headers/json.o 00:04:02.784 LINK esnap 00:04:02.784 LINK err_injection 00:04:03.071 CXX test/cpp_headers/jsonrpc.o 00:04:03.071 LINK dma_ut 00:04:03.071 CC test/nvme/startup/startup.o 00:04:03.330 CC test/nvme/reserve/reserve.o 00:04:03.330 CC test/nvme/simple_copy/simple_copy.o 00:04:03.589 LINK startup 00:04:03.589 CXX test/cpp_headers/likely.o 00:04:03.589 LINK reserve 00:04:03.589 LINK simple_copy 00:04:03.847 CXX test/cpp_headers/log.o 00:04:03.847 LINK blobfs_async_ut 00:04:03.847 CC test/unit/lib/event/app.c/app_ut.o 00:04:04.105 LINK blobfs_sync_ut 00:04:04.105 CXX test/cpp_headers/lvol.o 00:04:04.105 LINK part_ut 00:04:04.364 CXX test/cpp_headers/memory.o 00:04:04.364 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:04:04.364 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:04:04.364 CXX test/cpp_headers/mmio.o 00:04:04.622 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:04:04.622 LINK blobfs_bdev_ut 00:04:04.622 CXX test/cpp_headers/nbd.o 00:04:04.622 LINK app_ut 00:04:04.622 CXX test/cpp_headers/notify.o 00:04:04.880 CXX test/cpp_headers/nvme.o 00:04:04.880 CXX test/cpp_headers/nvme_intel.o 00:04:04.880 CXX test/cpp_headers/nvme_ocssd.o 00:04:04.880 CC test/nvme/connect_stress/connect_stress.o 00:04:04.880 CC test/nvme/boot_partition/boot_partition.o 00:04:04.880 LINK 
ioat_ut 00:04:05.137 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:05.137 CC test/nvme/fused_ordering/fused_ordering.o 00:04:05.137 CC test/nvme/compliance/nvme_compliance.o 00:04:05.137 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:05.137 LINK boot_partition 00:04:05.137 LINK connect_stress 00:04:05.137 CXX test/cpp_headers/nvme_spec.o 00:04:05.137 LINK doorbell_aers 00:04:05.137 LINK fused_ordering 00:04:05.137 CC test/nvme/fdp/fdp.o 00:04:05.137 LINK reactor_ut 00:04:05.396 LINK bdev_ut 00:04:05.396 CXX test/cpp_headers/nvme_zns.o 00:04:05.396 LINK nvme_compliance 00:04:05.654 CXX test/cpp_headers/nvmf.o 00:04:05.654 LINK fdp 00:04:05.654 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:04:05.654 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:04:05.654 CXX test/cpp_headers/nvmf_cmd.o 00:04:05.912 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:06.170 CXX test/cpp_headers/nvmf_spec.o 00:04:06.170 CXX test/cpp_headers/nvmf_transport.o 00:04:06.170 CXX test/cpp_headers/opal.o 00:04:06.170 CXX test/cpp_headers/opal_spec.o 00:04:06.170 CXX test/cpp_headers/pci_ids.o 00:04:06.429 CXX test/cpp_headers/pipe.o 00:04:06.429 CXX test/cpp_headers/queue.o 00:04:06.429 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:04:06.429 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:04:06.429 CC test/unit/lib/iscsi/param.c/param_ut.o 00:04:06.429 CXX test/cpp_headers/reduce.o 00:04:06.429 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:04:06.429 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:04:06.687 CC test/nvme/cuse/cuse.o 00:04:06.687 CXX test/cpp_headers/rpc.o 00:04:06.687 LINK conn_ut 00:04:06.687 LINK vbdev_lvol_ut 00:04:06.945 LINK init_grp_ut 00:04:06.945 CXX test/cpp_headers/scheduler.o 00:04:06.945 LINK param_ut 00:04:06.945 CXX test/cpp_headers/scsi.o 00:04:06.945 LINK jsonrpc_server_ut 00:04:06.945 CXX test/cpp_headers/scsi_spec.o 00:04:07.203 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:04:07.203 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:04:07.203 CXX test/cpp_headers/sock.o 00:04:07.203 CXX test/cpp_headers/stdinc.o 00:04:07.203 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:04:07.203 CC test/unit/lib/log/log.c/log_ut.o 00:04:07.462 CXX test/cpp_headers/string.o 00:04:07.462 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:04:07.462 LINK cuse 00:04:07.462 LINK log_ut 00:04:07.721 CXX test/cpp_headers/thread.o 00:04:07.721 LINK portal_grp_ut 00:04:07.721 CXX test/cpp_headers/trace.o 00:04:07.721 CXX test/cpp_headers/trace_parser.o 00:04:07.980 CC test/unit/lib/notify/notify.c/notify_ut.o 00:04:07.980 CXX test/cpp_headers/tree.o 00:04:07.980 CXX test/cpp_headers/ublk.o 00:04:07.980 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:04:07.980 LINK tgt_node_ut 00:04:07.980 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:04:08.238 CXX test/cpp_headers/util.o 00:04:08.238 LINK notify_ut 00:04:08.238 CXX test/cpp_headers/uuid.o 00:04:08.238 CXX test/cpp_headers/version.o 00:04:08.496 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:04:08.497 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:04:08.497 CXX test/cpp_headers/vfio_user_pci.o 00:04:08.755 CXX test/cpp_headers/vfio_user_spec.o 00:04:08.755 CXX test/cpp_headers/vhost.o 00:04:09.013 CXX test/cpp_headers/vmd.o 00:04:09.013 LINK iscsi_ut 00:04:09.013 CXX test/cpp_headers/xor.o 00:04:09.271 LINK lvol_ut 00:04:09.271 LINK nvme_ut 00:04:09.271 CXX test/cpp_headers/zipf.o 00:04:09.271 LINK json_parse_ut 00:04:09.529 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:09.529 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:04:09.529 
CC test/unit/lib/json/json_util.c/json_util_ut.o 00:04:09.529 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:04:09.529 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:10.119 LINK dev_ut 00:04:10.119 LINK json_util_ut 00:04:10.119 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:04:10.378 LINK blob_ut 00:04:10.378 LINK json_write_ut 00:04:10.378 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:04:10.636 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:04:10.636 LINK scsi_ut 00:04:10.636 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:10.636 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:04:10.636 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:10.895 LINK nvme_ctrlr_cmd_ut 00:04:10.895 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:10.895 LINK lun_ut 00:04:11.153 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:04:11.153 LINK bdev_ut 00:04:11.153 LINK scsi_pr_ut 00:04:11.153 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:04:11.153 LINK nvme_ctrlr_ut 00:04:11.411 LINK ctrlr_ut 00:04:11.411 CC test/unit/lib/sock/sock.c/sock_ut.o 00:04:11.670 LINK scsi_bdev_ut 00:04:11.670 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:11.670 LINK nvme_ns_ut 00:04:11.670 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:11.670 LINK bdev_raid_sb_ut 00:04:11.933 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:11.933 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:11.933 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:12.192 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:12.450 LINK concat_ut 00:04:12.450 LINK raid1_ut 00:04:12.450 LINK base64_ut 00:04:12.450 LINK iobuf_ut 00:04:12.708 LINK tcp_ut 00:04:12.708 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:12.708 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:12.708 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:12.708 LINK nvme_ns_cmd_ut 00:04:12.708 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:04:12.966 LINK cpuset_ut 00:04:12.966 LINK sock_ut 00:04:12.966 LINK raid5f_ut 00:04:12.966 LINK pci_event_ut 00:04:13.225 LINK bdev_raid_ut 00:04:13.225 LINK bit_array_ut 00:04:13.225 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:13.225 LINK bdev_zone_ut 00:04:13.225 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:13.225 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:13.483 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:13.483 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:13.483 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:13.483 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:13.483 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:13.483 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:13.483 LINK crc16_ut 00:04:13.483 LINK crc32_ieee_ut 00:04:13.740 LINK crc32c_ut 00:04:13.740 LINK thread_ut 00:04:13.740 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:13.740 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:13.998 LINK crc64_ut 00:04:13.998 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:13.998 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:14.255 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:14.255 LINK vbdev_zone_block_ut 00:04:14.513 LINK iov_ut 00:04:14.513 LINK posix_ut 00:04:14.513 LINK ctrlr_bdev_ut 00:04:14.513 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:14.771 LINK nvme_ns_ocssd_cmd_ut 00:04:14.771 CC test/unit/lib/util/math.c/math_ut.o 00:04:14.771 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:14.771 CC 
test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:14.771 LINK math_ut 00:04:14.771 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:15.029 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:15.287 LINK ctrlr_discovery_ut 00:04:15.287 LINK dif_ut 00:04:15.287 LINK nvme_quirks_ut 00:04:15.287 LINK pipe_ut 00:04:15.545 LINK subsystem_ut 00:04:15.545 LINK nvme_poll_group_ut 00:04:15.545 LINK subsystem_ut 00:04:15.545 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:15.545 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:15.545 CC test/unit/lib/util/string.c/string_ut.o 00:04:15.545 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:15.803 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:15.803 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:15.803 LINK nvme_pcie_ut 00:04:15.803 LINK string_ut 00:04:15.803 LINK nvme_qpair_ut 00:04:16.062 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:16.062 LINK rpc_ut 00:04:16.062 LINK xor_ut 00:04:16.062 LINK idxd_user_ut 00:04:16.062 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:16.062 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:16.062 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:16.062 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:16.319 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:16.319 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:16.577 LINK ftl_l2p_ut 00:04:16.577 LINK common_ut 00:04:16.577 LINK idxd_ut 00:04:16.577 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:16.841 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:16.841 LINK nvme_io_msg_ut 00:04:16.841 LINK nvmf_ut 00:04:16.841 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:17.103 LINK nvme_transport_ut 00:04:17.103 LINK ftl_bitmap_ut 00:04:17.103 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:17.362 LINK ftl_io_ut 00:04:17.362 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:17.362 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:17.362 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:17.362 LINK ftl_mempool_ut 00:04:17.362 LINK ftl_band_ut 00:04:17.620 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:17.620 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:17.620 LINK ftl_mngt_ut 00:04:17.878 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:17.878 LINK vhost_ut 00:04:17.878 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:18.136 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:18.394 LINK bdev_nvme_ut 00:04:18.652 LINK nvme_fabric_ut 00:04:18.652 LINK ftl_layout_upgrade_ut 00:04:18.652 LINK nvme_tcp_ut 00:04:18.652 LINK ftl_sb_ut 00:04:18.652 LINK nvme_opal_ut 00:04:18.910 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:19.168 LINK nvme_pcie_common_ut 00:04:20.540 LINK nvme_rdma_ut 00:04:20.540 LINK nvme_cuse_ut 00:04:20.540 LINK transport_ut 00:04:21.103 LINK rdma_ut 00:04:21.361 00:04:21.361 real 1m37.950s 00:04:21.361 user 8m37.445s 00:04:21.361 sys 1m35.014s 00:04:21.361 10:29:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:21.361 10:29:47 -- common/autotest_common.sh@10 -- $ set +x 00:04:21.361 ************************************ 00:04:21.361 END TEST unittest_build 00:04:21.361 ************************************ 00:04:21.620 10:29:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:21.620 10:29:48 -- nvmf/common.sh@7 -- # uname -s 00:04:21.620 10:29:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.620 10:29:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.620 10:29:48 -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.620 10:29:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.620 10:29:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.620 10:29:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.620 10:29:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.620 10:29:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.620 10:29:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.620 10:29:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.620 10:29:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3ecea187-2b56-4295-8285-62a67ecca762 00:04:21.620 10:29:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=3ecea187-2b56-4295-8285-62a67ecca762 00:04:21.620 10:29:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.620 10:29:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.620 10:29:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.620 10:29:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:21.620 10:29:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.620 10:29:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.620 10:29:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.620 10:29:48 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.620 10:29:48 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.620 10:29:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.620 10:29:48 -- paths/export.sh@5 -- # export PATH 00:04:21.621 10:29:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:21.621 10:29:48 -- nvmf/common.sh@46 -- # : 0 00:04:21.621 10:29:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:21.621 10:29:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:21.621 10:29:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:21.621 10:29:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.621 10:29:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.621 10:29:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:21.621 10:29:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:21.621 10:29:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:21.621 10:29:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:21.621 10:29:48 -- spdk/autotest.sh@32 -- # uname -s 00:04:21.621 10:29:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:21.621 10:29:48 -- spdk/autotest.sh@33 -- # 
old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:21.621 10:29:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.621 10:29:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:21.621 10:29:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:21.621 10:29:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:21.621 10:29:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:21.621 10:29:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:21.621 10:29:48 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:21.621 10:29:48 -- spdk/autotest.sh@48 -- # udevadm_pid=103990 00:04:21.621 10:29:48 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:21.621 10:29:48 -- spdk/autotest.sh@54 -- # echo 104006 00:04:21.621 10:29:48 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:21.621 10:29:48 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:21.621 10:29:48 -- spdk/autotest.sh@56 -- # echo 104010 00:04:21.621 10:29:48 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:21.621 10:29:48 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:21.621 10:29:48 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:21.621 10:29:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:21.621 10:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:21.621 10:29:48 -- spdk/autotest.sh@70 -- # create_test_list 00:04:21.621 10:29:48 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:21.621 10:29:48 -- common/autotest_common.sh@10 -- # set +x 00:04:21.621 10:29:48 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:21.621 10:29:48 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:21.621 10:29:48 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:21.621 10:29:48 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:21.621 10:29:48 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:21.621 10:29:48 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:21.621 10:29:48 -- common/autotest_common.sh@1440 -- # uname 00:04:21.621 10:29:48 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:21.621 10:29:48 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:21.621 10:29:48 -- common/autotest_common.sh@1460 -- # uname 00:04:21.621 10:29:48 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:21.621 10:29:48 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:21.621 10:29:48 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:21.621 10:29:48 -- spdk/autotest.sh@83 -- # hash lcov 00:04:21.621 10:29:48 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:21.621 10:29:48 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:21.621 --rc lcov_branch_coverage=1 00:04:21.621 --rc lcov_function_coverage=1 00:04:21.621 --rc genhtml_branch_coverage=1 00:04:21.621 --rc genhtml_function_coverage=1 00:04:21.621 --rc genhtml_legend=1 00:04:21.621 --rc geninfo_all_blocks=1 00:04:21.621 ' 00:04:21.621 10:29:48 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:21.621 --rc lcov_branch_coverage=1 
00:04:21.621 --rc lcov_function_coverage=1 00:04:21.621 --rc genhtml_branch_coverage=1 00:04:21.621 --rc genhtml_function_coverage=1 00:04:21.621 --rc genhtml_legend=1 00:04:21.621 --rc geninfo_all_blocks=1 00:04:21.621 ' 00:04:21.621 10:29:48 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:21.621 --rc lcov_branch_coverage=1 00:04:21.621 --rc lcov_function_coverage=1 00:04:21.621 --rc genhtml_branch_coverage=1 00:04:21.621 --rc genhtml_function_coverage=1 00:04:21.621 --rc genhtml_legend=1 00:04:21.621 --rc geninfo_all_blocks=1 00:04:21.621 --no-external' 00:04:21.621 10:29:48 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:21.621 --rc lcov_branch_coverage=1 00:04:21.621 --rc lcov_function_coverage=1 00:04:21.621 --rc genhtml_branch_coverage=1 00:04:21.621 --rc genhtml_function_coverage=1 00:04:21.621 --rc genhtml_legend=1 00:04:21.621 --rc geninfo_all_blocks=1 00:04:21.621 --no-external' 00:04:21.621 10:29:48 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:21.879 lcov: LCOV version 1.15 00:04:21.879 10:29:48 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:39.951 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:39.951 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:39.951 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:39.951 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:39.951 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:39.952 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:12.041 
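The trace above shows two pieces of pre-test setup: spdk/autotest.sh saves the apport core_pattern and hands core dumps to its core-collector.sh instead, and then lcov captures a zero-count baseline so later coverage captures can be diffed against it (the geninfo "no functions found" warnings that follow are the expected result of capturing before any code has run). A minimal standalone sketch of those two steps, assuming root privileges, assuming the traced echo writes to /proc/sys/kernel/core_pattern (redirection targets are not visible in xtrace output), and using placeholder repo_dir/output_dir variables rather than the exact paths above:

  # save the current pattern so it can be restored after the run, then point
  # kernel core dumps at the collector script (root required for this sysctl)
  repo_dir=/path/to/spdk           # placeholder
  output_dir=/path/to/output       # placeholder
  old_core_pattern=$(</proc/sys/kernel/core_pattern)
  mkdir -p "$output_dir/coredumps"
  echo "|$repo_dir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

  # zero-count ("Baseline") capture: -i records every instrumented file with
  # 0 hits, -t names the tracefile, --no-external skips files outside repo_dir
  lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
       --no-external -q -c -i -t Baseline \
       -d "$repo_dir" -o "$output_dir/cov_base.info"

The warnings below continue that baseline capture and simply mark headers and sources that have produced no coverage data yet.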
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:12.041 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:12.041 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:12.042 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:12.042 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:12.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:12.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:12.042 10:30:38 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:12.042 10:30:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:12.042 10:30:38 -- common/autotest_common.sh@10 -- # set +x 00:05:12.042 10:30:38 -- spdk/autotest.sh@102 -- # rm -f 00:05:12.042 10:30:38 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:12.609 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:12.609 10:30:39 -- 
spdk/autotest.sh@107 -- # get_zoned_devs 00:05:12.609 10:30:39 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:12.609 10:30:39 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:12.609 10:30:39 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:12.609 10:30:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:12.609 10:30:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:12.609 10:30:39 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:12.609 10:30:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:12.609 10:30:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:12.609 10:30:39 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:12.609 10:30:39 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:05:12.609 10:30:39 -- spdk/autotest.sh@121 -- # grep -v p 00:05:12.609 10:30:39 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:12.609 10:30:39 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:12.609 10:30:39 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:12.609 10:30:39 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:12.609 10:30:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:12.609 No valid GPT data, bailing 00:05:12.609 10:30:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:12.609 10:30:39 -- scripts/common.sh@393 -- # pt= 00:05:12.609 10:30:39 -- scripts/common.sh@394 -- # return 1 00:05:12.609 10:30:39 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:12.609 1+0 records in 00:05:12.609 1+0 records out 00:05:12.609 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572372 s, 183 MB/s 00:05:12.609 10:30:39 -- spdk/autotest.sh@129 -- # sync 00:05:12.609 10:30:39 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:12.609 10:30:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:12.609 10:30:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:13.985 10:30:40 -- spdk/autotest.sh@135 -- # uname -s 00:05:13.985 10:30:40 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:13.985 10:30:40 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:13.985 10:30:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.985 10:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.985 10:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:13.985 ************************************ 00:05:13.985 START TEST setup.sh 00:05:13.985 ************************************ 00:05:13.985 10:30:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:14.243 * Looking for test storage... 
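Ahead of the setup tests, the trace above enumerates /sys/block/nvme* to find zoned namespaces, probes /dev/nvme0n1 for a partition table with spdk-gpt.py and blkid, and, finding none ("No valid GPT data, bailing"), zeroes the first megabyte with dd. A minimal sketch of the zoned-namespace check on its own, written directly against sysfs rather than through the autotest helpers:

  # a block device advertises its zoned model in queue/zoned; anything other
  # than "none" (host-aware, host-managed) needs zone-aware handling
  for nvme in /sys/block/nvme*n*; do
      [[ -e $nvme/queue/zoned ]] || continue
      if [[ $(<"$nvme/queue/zoned") != none ]]; then
          echo "${nvme##*/}: zoned ($(<"$nvme/queue/zoned"))"
      fi
  done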
00:05:14.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.243 10:30:40 -- setup/test-setup.sh@10 -- # uname -s 00:05:14.243 10:30:40 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:14.243 10:30:40 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:14.243 10:30:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.243 10:30:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.243 10:30:40 -- common/autotest_common.sh@10 -- # set +x 00:05:14.243 ************************************ 00:05:14.243 START TEST acl 00:05:14.243 ************************************ 00:05:14.243 10:30:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:14.243 * Looking for test storage... 00:05:14.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:14.243 10:30:40 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:14.243 10:30:40 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:14.243 10:30:40 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:14.243 10:30:40 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:14.243 10:30:40 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:14.243 10:30:40 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:14.243 10:30:40 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:14.243 10:30:40 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.243 10:30:40 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:14.243 10:30:40 -- setup/acl.sh@12 -- # devs=() 00:05:14.243 10:30:40 -- setup/acl.sh@12 -- # declare -a devs 00:05:14.243 10:30:40 -- setup/acl.sh@13 -- # drivers=() 00:05:14.243 10:30:40 -- setup/acl.sh@13 -- # declare -A drivers 00:05:14.243 10:30:40 -- setup/acl.sh@51 -- # setup reset 00:05:14.243 10:30:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.243 10:30:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.809 10:30:41 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:14.809 10:30:41 -- setup/acl.sh@16 -- # local dev driver 00:05:14.809 10:30:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.809 10:30:41 -- setup/acl.sh@15 -- # setup output status 00:05:14.809 10:30:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.809 10:30:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:14.809 Hugepages 00:05:14.809 node hugesize free / total 00:05:14.809 10:30:41 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:14.809 10:30:41 -- setup/acl.sh@19 -- # continue 00:05:14.809 10:30:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.809 00:05:14.809 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.809 10:30:41 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:14.809 10:30:41 -- setup/acl.sh@19 -- # continue 00:05:14.809 10:30:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.068 10:30:41 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:15.068 10:30:41 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:15.068 10:30:41 -- setup/acl.sh@20 -- # continue 00:05:15.068 10:30:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.068 10:30:41 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:15.068 10:30:41 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:15.068 10:30:41 -- setup/acl.sh@21 -- # 
[[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:15.068 10:30:41 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:15.068 10:30:41 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:15.068 10:30:41 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:15.068 10:30:41 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:15.068 10:30:41 -- setup/acl.sh@54 -- # run_test denied denied 00:05:15.068 10:30:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.068 10:30:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.068 10:30:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.068 ************************************ 00:05:15.068 START TEST denied 00:05:15.068 ************************************ 00:05:15.068 10:30:41 -- common/autotest_common.sh@1104 -- # denied 00:05:15.068 10:30:41 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:15.068 10:30:41 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:15.068 10:30:41 -- setup/acl.sh@38 -- # setup output config 00:05:15.068 10:30:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.068 10:30:41 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.445 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:16.445 10:30:43 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:16.445 10:30:43 -- setup/acl.sh@28 -- # local dev driver 00:05:16.445 10:30:43 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:16.445 10:30:43 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:16.445 10:30:43 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:16.445 10:30:43 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:16.445 10:30:43 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:16.445 10:30:43 -- setup/acl.sh@41 -- # setup reset 00:05:16.445 10:30:43 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.445 10:30:43 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.010 00:05:17.010 real 0m1.874s 00:05:17.010 user 0m0.518s 00:05:17.010 sys 0m1.408s 00:05:17.010 10:30:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.010 10:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.010 ************************************ 00:05:17.010 END TEST denied 00:05:17.010 ************************************ 00:05:17.010 10:30:43 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:17.010 10:30:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.010 10:30:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.010 10:30:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.010 ************************************ 00:05:17.010 START TEST allowed 00:05:17.010 ************************************ 00:05:17.010 10:30:43 -- common/autotest_common.sh@1104 -- # allowed 00:05:17.010 10:30:43 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:17.010 10:30:43 -- setup/acl.sh@45 -- # setup output config 00:05:17.010 10:30:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.010 10:30:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.010 10:30:43 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:18.383 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.383 10:30:45 -- setup/acl.sh@47 -- # verify 00:05:18.383 10:30:45 -- setup/acl.sh@28 -- # local dev driver 00:05:18.383 10:30:45 -- setup/acl.sh@48 -- # setup reset 00:05:18.383 10:30:45 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.383 10:30:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.950 ************************************ 00:05:18.950 END TEST allowed 00:05:18.950 ************************************ 00:05:18.950 00:05:18.950 real 0m1.929s 00:05:18.950 user 0m0.437s 00:05:18.950 sys 0m1.507s 00:05:18.950 10:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.950 10:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:18.950 00:05:18.950 real 0m4.778s 00:05:18.950 user 0m1.502s 00:05:18.950 sys 0m3.408s 00:05:18.950 10:30:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.950 10:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:18.950 ************************************ 00:05:18.950 END TEST acl 00:05:18.950 ************************************ 00:05:18.950 10:30:45 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:18.950 10:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.950 10:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.950 10:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:18.950 ************************************ 00:05:18.950 START TEST hugepages 00:05:18.950 ************************************ 00:05:18.950 10:30:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:19.209 * Looking for test storage... 00:05:19.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:19.210 10:30:45 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:19.210 10:30:45 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:19.210 10:30:45 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:19.210 10:30:45 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:19.210 10:30:45 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:19.210 10:30:45 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:19.210 10:30:45 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:19.210 10:30:45 -- setup/common.sh@18 -- # local node= 00:05:19.210 10:30:45 -- setup/common.sh@19 -- # local var val 00:05:19.210 10:30:45 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.210 10:30:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.210 10:30:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.210 10:30:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.210 10:30:45 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.210 10:30:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 2137352 kB' 'MemAvailable: 7403908 kB' 'Buffers: 40272 kB' 'Cached: 5325008 kB' 'SwapCached: 0 kB' 'Active: 1375348 kB' 'Inactive: 4106176 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 127084 kB' 'Active(file): 1374268 kB' 'Inactive(file): 3979092 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 145744 kB' 'Mapped: 68096 kB' 'Shmem: 2596 kB' 'KReclaimable: 234500 kB' 'Slab: 301568 kB' 'SReclaimable: 234500 kB' 'SUnreclaim: 67068 kB' 'KernelStack: 4364 kB' 'PageTables: 3716 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 491396 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Inactive(file) 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.210 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.210 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # continue 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.211 10:30:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.211 10:30:45 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:19.211 10:30:45 -- setup/common.sh@33 -- # echo 2048 00:05:19.211 10:30:45 -- setup/common.sh@33 -- # return 0 00:05:19.211 10:30:45 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:19.211 10:30:45 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:19.211 10:30:45 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:19.211 10:30:45 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:19.211 10:30:45 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:19.211 10:30:45 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:19.211 10:30:45 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:19.211 10:30:45 -- setup/hugepages.sh@207 -- # get_nodes 00:05:19.211 10:30:45 -- setup/hugepages.sh@27 -- # local node 00:05:19.211 10:30:45 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:05:19.211 10:30:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:19.211 10:30:45 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.211 10:30:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.211 10:30:45 -- setup/hugepages.sh@208 -- # clear_hp 00:05:19.211 10:30:45 -- setup/hugepages.sh@37 -- # local node hp 00:05:19.211 10:30:45 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:19.211 10:30:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.211 10:30:45 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.211 10:30:45 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:19.211 10:30:45 -- setup/hugepages.sh@41 -- # echo 0 00:05:19.211 10:30:45 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:19.211 10:30:45 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:19.211 10:30:45 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:19.211 10:30:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.211 10:30:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.211 10:30:45 -- common/autotest_common.sh@10 -- # set +x 00:05:19.211 ************************************ 00:05:19.211 START TEST default_setup 00:05:19.211 ************************************ 00:05:19.211 10:30:45 -- common/autotest_common.sh@1104 -- # default_setup 00:05:19.211 10:30:45 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:19.211 10:30:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:19.211 10:30:45 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:19.211 10:30:45 -- setup/hugepages.sh@51 -- # shift 00:05:19.211 10:30:45 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:19.211 10:30:45 -- setup/hugepages.sh@52 -- # local node_ids 00:05:19.211 10:30:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.211 10:30:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:19.211 10:30:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:19.211 10:30:45 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:19.211 10:30:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.211 10:30:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:19.211 10:30:45 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.211 10:30:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.211 10:30:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.211 10:30:45 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:19.211 10:30:45 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:19.211 10:30:45 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:19.211 10:30:45 -- setup/hugepages.sh@73 -- # return 0 00:05:19.211 10:30:45 -- setup/hugepages.sh@137 -- # setup output 00:05:19.211 10:30:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.211 10:30:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:19.728 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:20.294 10:30:46 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:20.294 10:30:46 -- setup/hugepages.sh@89 -- # local node 00:05:20.294 10:30:46 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.294 10:30:46 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.294 10:30:46 -- 
setup/hugepages.sh@92 -- # local surp 00:05:20.294 10:30:46 -- setup/hugepages.sh@93 -- # local resv 00:05:20.294 10:30:46 -- setup/hugepages.sh@94 -- # local anon 00:05:20.294 10:30:46 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.294 10:30:46 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.294 10:30:46 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.294 10:30:46 -- setup/common.sh@18 -- # local node= 00:05:20.294 10:30:46 -- setup/common.sh@19 -- # local var val 00:05:20.294 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.294 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.294 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.294 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.294 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.294 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4219480 kB' 'MemAvailable: 9486176 kB' 'Buffers: 40272 kB' 'Cached: 5324848 kB' 'SwapCached: 0 kB' 'Active: 1375436 kB' 'Inactive: 4121604 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142508 kB' 'Active(file): 1374360 kB' 'Inactive(file): 3979096 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 161260 kB' 'Mapped: 67928 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301936 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67392 kB' 'KernelStack: 4336 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.294 10:30:46 -- 
setup/common.sh@32 -- # continue 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.294 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.294 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ 
Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.295 10:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.295 10:30:46 -- setup/common.sh@33 -- # echo 0 00:05:20.295 10:30:46 -- setup/common.sh@33 -- # return 0 00:05:20.295 10:30:46 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.295 10:30:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.295 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.295 10:30:46 -- setup/common.sh@18 -- # local node= 00:05:20.295 10:30:46 -- setup/common.sh@19 -- # local var val 00:05:20.295 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.295 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.295 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.295 10:30:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.295 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.295 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.295 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4219480 kB' 'MemAvailable: 9486176 kB' 'Buffers: 40272 kB' 'Cached: 5324848 kB' 'SwapCached: 0 kB' 'Active: 1375436 kB' 'Inactive: 4121404 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142308 kB' 'Active(file): 1374360 kB' 'Inactive(file): 3979096 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 161072 kB' 'Mapped: 67928 kB' 'Shmem: 2596 kB' 'KReclaimable: 
234544 kB' 'Slab: 301936 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67392 kB' 'KernelStack: 4336 kB' 'PageTables: 3584 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 
10:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.296 10:30:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:20.296 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.296 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.297 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.297 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.557 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.557 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.557 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.557 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.557 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.557 10:30:46 -- setup/common.sh@33 -- # echo 0 00:05:20.557 10:30:46 -- setup/common.sh@33 -- # return 0 00:05:20.557 10:30:46 -- setup/hugepages.sh@99 -- # surp=0 00:05:20.557 10:30:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.557 10:30:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.557 10:30:46 -- setup/common.sh@18 -- # local node= 00:05:20.557 10:30:46 -- setup/common.sh@19 -- # local var val 00:05:20.557 10:30:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.557 10:30:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.557 10:30:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.557 10:30:46 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.557 10:30:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.557 10:30:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.557 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.557 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.557 10:30:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4219732 kB' 'MemAvailable: 9486428 kB' 'Buffers: 40272 kB' 'Cached: 5324848 kB' 'SwapCached: 0 kB' 'Active: 1375436 kB' 'Inactive: 4121892 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142796 kB' 'Active(file): 1374360 kB' 'Inactive(file): 3979096 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 161256 kB' 'Mapped: 67928 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301936 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67392 kB' 'KernelStack: 4320 kB' 'PageTables: 3540 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:20.557 10:30:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.557 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 
-- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.558 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.558 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.559 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.559 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.559 10:30:46 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.559 10:30:46 -- setup/common.sh@33 -- # echo 0 00:05:20.559 10:30:46 -- setup/common.sh@33 -- # return 0 00:05:20.559 10:30:47 -- setup/hugepages.sh@100 -- # resv=0 00:05:20.559 
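The trace above is setup/common.sh resolving one /proc/meminfo key per call: it maps the file into an array, strips any "Node <N> " prefix, then loops read -r var val _ with IFS=': ', skipping every key until the requested one is reached and echoing its value (here AnonHugePages, HugePages_Surp and HugePages_Rsvd each resolve to 0, giving anon=0, surp=0 and resv=0). A minimal stand-alone sketch of that lookup, with a hypothetical function name and simplified prefix handling rather than the actual setup/common.sh source:

get_meminfo_sketch() {
    local get=$1 node=${2:-}              # key to fetch, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo   # per-node lines start with "Node <N> "
    fi
    local var val _
    while read -r var val _; do
        var=${var%:}                      # drop the trailing colon from the key
        if [[ $var == "$get" ]]; then
            echo "$val"                   # kB value, or a bare count for HugePages_* keys
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Calling, for example, get_meminfo_sketch HugePages_Rsvd on this VM would print 0, matching the HugePages_Rsvd: 0 entry in the meminfo dumps above.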
10:30:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:20.559 nr_hugepages=1024 00:05:20.559 resv_hugepages=0 00:05:20.559 10:30:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.559 surplus_hugepages=0 00:05:20.559 10:30:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.559 anon_hugepages=0 00:05:20.559 10:30:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.559 10:30:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.559 10:30:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:20.559 10:30:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.559 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.559 10:30:47 -- setup/common.sh@18 -- # local node= 00:05:20.559 10:30:47 -- setup/common.sh@19 -- # local var val 00:05:20.559 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.559 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.559 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.559 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.559 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.559 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4220256 kB' 'MemAvailable: 9486956 kB' 'Buffers: 40272 kB' 'Cached: 5324848 kB' 'SwapCached: 0 kB' 'Active: 1375436 kB' 'Inactive: 4121436 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142336 kB' 'Active(file): 1374360 kB' 'Inactive(file): 3979100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'AnonPages: 161008 kB' 'Mapped: 67928 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301936 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67392 kB' 'KernelStack: 4356 kB' 'PageTables: 3456 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 505756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.559 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.559 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 
10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.560 10:30:47 -- setup/common.sh@33 -- # echo 1024 00:05:20.560 10:30:47 -- setup/common.sh@33 -- # return 0 00:05:20.560 10:30:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.560 10:30:47 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.560 10:30:47 -- setup/hugepages.sh@27 -- # local node 00:05:20.560 10:30:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.560 10:30:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:20.560 10:30:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.560 10:30:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.560 10:30:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.560 10:30:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.560 10:30:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.560 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.560 10:30:47 -- setup/common.sh@18 -- # local node=0 00:05:20.560 10:30:47 -- setup/common.sh@19 -- # local var val 00:05:20.560 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.560 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.560 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.560 10:30:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.560 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.560 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4220256 kB' 'MemUsed: 8022720 kB' 'SwapCached: 0 kB' 'Active: 1375436 kB' 'Inactive: 4121956 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142856 kB' 'Active(file): 1374360 kB' 'Inactive(file): 3979100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 400 kB' 'Writeback: 0 kB' 'FilePages: 5365120 kB' 'Mapped: 67928 kB' 'AnonPages: 161268 kB' 'Shmem: 2596 kB' 'KernelStack: 4356 kB' 'PageTables: 3716 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234544 kB' 'Slab: 301936 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 
10:30:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.560 10:30:47 -- setup/common.sh@31 
-- # IFS=': ' 00:05:20.560 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.560 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # continue 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.561 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.561 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.561 10:30:47 -- setup/common.sh@33 -- # echo 0 00:05:20.561 10:30:47 -- setup/common.sh@33 -- # return 0 00:05:20.561 10:30:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.561 10:30:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.561 10:30:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.561 10:30:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.561 node0=1024 expecting 1024 00:05:20.561 10:30:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.561 10:30:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.561 00:05:20.561 real 0m1.347s 00:05:20.561 user 0m0.360s 00:05:20.561 sys 0m0.982s 00:05:20.561 10:30:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.561 10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:20.561 ************************************ 00:05:20.561 END TEST default_setup 00:05:20.561 ************************************ 00:05:20.561 10:30:47 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:20.561 10:30:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.561 10:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.561 10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:20.561 ************************************ 00:05:20.561 START TEST per_node_1G_alloc 00:05:20.561 ************************************ 00:05:20.561 10:30:47 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 
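START TEST per_node_1G_alloc opens the next case. The entries that follow show get_test_nr_hugepages turning a 1048576 kB request for node 0 into a per-node page count; with the 2048 kB default hugepage size reported elsewhere in this log that comes to 512 pages, which is where nr_hugepages=512, NRHUGE=512 and HUGENODE=0 below come from. Illustrative arithmetic only, using the numbers visible in the trace:

    size_kb=1048576                      # requested allocation for node 0 (1 GiB)
    hugepage_kb=2048                     # Hugepagesize from /proc/meminfo
    echo $(( size_kb / hugepage_kb ))    # 512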
00:05:20.561 10:30:47 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:20.561 10:30:47 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:20.561 10:30:47 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.561 10:30:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:20.561 10:30:47 -- setup/hugepages.sh@51 -- # shift 00:05:20.561 10:30:47 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:20.561 10:30:47 -- setup/hugepages.sh@52 -- # local node_ids 00:05:20.561 10:30:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.561 10:30:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:20.561 10:30:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:20.561 10:30:47 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:20.561 10:30:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.561 10:30:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.561 10:30:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.561 10:30:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.561 10:30:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.561 10:30:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:20.561 10:30:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:20.561 10:30:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:20.561 10:30:47 -- setup/hugepages.sh@73 -- # return 0 00:05:20.561 10:30:47 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:20.561 10:30:47 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:20.561 10:30:47 -- setup/hugepages.sh@146 -- # setup output 00:05:20.561 10:30:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.561 10:30:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.820 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:20.820 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.079 10:30:47 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:21.079 10:30:47 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:21.079 10:30:47 -- setup/hugepages.sh@89 -- # local node 00:05:21.079 10:30:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.079 10:30:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.079 10:30:47 -- setup/hugepages.sh@92 -- # local surp 00:05:21.079 10:30:47 -- setup/hugepages.sh@93 -- # local resv 00:05:21.079 10:30:47 -- setup/hugepages.sh@94 -- # local anon 00:05:21.079 10:30:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.079 10:30:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.079 10:30:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.079 10:30:47 -- setup/common.sh@18 -- # local node= 00:05:21.079 10:30:47 -- setup/common.sh@19 -- # local var val 00:05:21.079 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.079 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.079 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.079 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.079 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.079 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5263256 kB' 
'MemAvailable: 10529960 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375440 kB' 'Inactive: 4122052 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142952 kB' 'Active(file): 1374364 kB' 'Inactive(file): 3979100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 161504 kB' 'Mapped: 67892 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301864 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67320 kB' 'KernelStack: 4388 kB' 'PageTables: 3688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.079 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.079 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 
10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.080 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.080 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.080 10:30:47 -- setup/common.sh@33 -- # echo 0 00:05:21.080 10:30:47 -- setup/common.sh@33 -- # return 0 00:05:21.080 10:30:47 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.080 10:30:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.341 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.341 10:30:47 -- setup/common.sh@18 -- # local node= 00:05:21.341 10:30:47 -- setup/common.sh@19 -- # local var val 00:05:21.341 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.341 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.341 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.341 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.341 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.341 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.341 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.341 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5263256 kB' 'MemAvailable: 10529960 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375432 kB' 'Inactive: 4121508 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 142408 kB' 'Active(file): 1374364 kB' 'Inactive(file): 3979100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 160988 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301944 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67400 kB' 'KernelStack: 4304 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.342 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.342 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # 
continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.343 10:30:47 -- setup/common.sh@33 -- # echo 0 00:05:21.343 10:30:47 -- setup/common.sh@33 -- # return 0 00:05:21.343 10:30:47 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.343 10:30:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.343 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.343 10:30:47 -- setup/common.sh@18 -- # local node= 00:05:21.343 10:30:47 -- setup/common.sh@19 -- # local var val 00:05:21.343 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.343 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.343 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.343 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.343 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.343 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5263256 kB' 'MemAvailable: 10529960 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375432 kB' 'Inactive: 4121508 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 142408 kB' 'Active(file): 1374364 kB' 'Inactive(file): 3979100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 160988 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301944 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67400 kB' 'KernelStack: 4304 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 
0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- 
setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.343 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.343 10:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 
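This stretch of the log repeats the same field scan three times: verify_nr_hugepages reads AnonHugePages (gated on transparent hugepages not being set to [never], per the [[ always [madvise] never != ... ]] check above), then HugePages_Surp, then HugePages_Rsvd; anon=0 and surp=0 were set above, and resv=0 follows just below. A condensed sketch of that bookkeeping, assuming the get_meminfo helper sketched earlier:

    # Names mirror the hugepages.sh locals visible in the trace; values are the ones this run reports.
    anon=$(get_meminfo AnonHugePages)    # 0 kB: THP is not inflating the hugepage accounting
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no reserved-but-unfaulted pages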
00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.344 10:30:47 -- setup/common.sh@33 -- # echo 0 00:05:21.344 10:30:47 -- setup/common.sh@33 -- # return 0 00:05:21.344 10:30:47 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.344 nr_hugepages=512 00:05:21.344 10:30:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:21.344 resv_hugepages=0 00:05:21.344 10:30:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.344 surplus_hugepages=0 00:05:21.344 10:30:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.344 anon_hugepages=0 00:05:21.344 10:30:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.344 10:30:47 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.344 10:30:47 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:21.344 10:30:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.344 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.344 10:30:47 -- setup/common.sh@18 -- # local node= 00:05:21.344 10:30:47 -- setup/common.sh@19 -- # local var val 00:05:21.344 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.344 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.344 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.344 10:30:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.344 10:30:47 -- setup/common.sh@28 -- # 
mapfile -t mem 00:05:21.344 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5263256 kB' 'MemAvailable: 10529960 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375432 kB' 'Inactive: 4121508 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 142408 kB' 'Active(file): 1374364 kB' 'Inactive(file): 3979100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 160988 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301944 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67400 kB' 'KernelStack: 4372 kB' 'PageTables: 3748 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19524 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.344 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.344 10:30:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 
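The scan in progress here is the same pattern once more, now for HugePages_Total. Once it returns 512, the test checks that total against nr_hugepages + surp + resv and then repeats the read per NUMA node through /sys/devices/system/node/node0/meminfo, which is why the empty node= of the earlier calls becomes node=0 further down. A sketch of that final consistency check, assuming the helper sketched earlier and using the values visible in this log:

    total=$(get_meminfo HugePages_Total)         # 512, read back from /proc/meminfo
    (( total == nr_hugepages + surp + resv ))    # 512 == 512 + 0 + 0
    node0_surp=$(get_meminfo HugePages_Surp 0)   # same read, now against node0's own meminfo file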
00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.345 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.345 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.345 10:30:47 -- setup/common.sh@33 -- # echo 512 00:05:21.345 10:30:47 -- setup/common.sh@33 -- # return 0 00:05:21.345 10:30:47 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.345 10:30:47 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.346 10:30:47 -- setup/hugepages.sh@27 -- # local node 00:05:21.346 10:30:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.346 10:30:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:21.346 10:30:47 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.346 10:30:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.346 10:30:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.346 10:30:47 -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.346 10:30:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.346 10:30:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.346 10:30:47 -- setup/common.sh@18 -- # local node=0 00:05:21.346 10:30:47 -- setup/common.sh@19 -- # local var val 00:05:21.346 10:30:47 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.346 10:30:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.346 10:30:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.346 10:30:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.346 10:30:47 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.346 10:30:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5263256 kB' 'MemUsed: 6979720 kB' 'SwapCached: 0 kB' 'Active: 1375432 kB' 'Inactive: 4121728 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 142628 kB' 'Active(file): 1374364 kB' 'Inactive(file): 3979100 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 5365124 kB' 'Mapped: 67936 kB' 'AnonPages: 161468 kB' 'Shmem: 2596 kB' 'KernelStack: 4424 kB' 'PageTables: 3708 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234544 kB' 'Slab: 301944 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
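The long runs of "[[ <field> == ... ]] / continue" entries filling this trace are setup/common.sh's get_meminfo helper scanning a meminfo file one key at a time until it reaches the requested field (here HugePages_Surp for node 0). A minimal sketch of that pattern, assuming the same helper name and the standard /proc and per-node sysfs meminfo layout; the body is illustrative, not the verbatim SPDK implementation:

# Minimal sketch of the meminfo lookup driving the scans above.
get_meminfo() {   # usage: get_meminfo <Field> [node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix every line with "Node N "; drop it before matching.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(sed "s/^Node $node //" "$mem_f")
    return 1
}

get_meminfo HugePages_Total     # prints 512 in the run traced here
get_meminfo HugePages_Surp 0    # the per-node query being traced above, prints 0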
00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.346 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.346 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 
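The node list that this per-node query iterates over was built a few steps earlier by hugepages.sh's get_nodes, which simply globs /sys/devices/system/node/. A sketch of that enumeration (the extglob pattern and array names follow the xtrace; the value read per node is illustrative), plus the arithmetic behind the 512 pages expected on node 0, i.e. 1 GiB of requested hugepage memory at the 2048 kB hugepage size reported in this run:

# Sketch of get_nodes as traced above; details are illustrative.
shopt -s extglob nullglob
declare -A nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    nodes_sys[$id]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
no_nodes=${#nodes_sys[@]}
echo "detected $no_nodes NUMA node(s): ${!nodes_sys[*]}"

# Why node0 is expected to hold 512 pages in per_node_1G_alloc:
echo $(( (1 * 1024 * 1024) / 2048 ))   # 1 GiB in kB / Hugepagesize (2048 kB) = 512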
00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # continue 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.347 10:30:47 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.347 10:30:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.347 10:30:47 -- setup/common.sh@33 -- # echo 0 00:05:21.347 10:30:47 -- setup/common.sh@33 -- # return 0 00:05:21.347 10:30:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.347 10:30:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.347 10:30:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.347 10:30:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.347 node0=512 expecting 512 00:05:21.347 10:30:47 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:21.347 10:30:47 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:21.347 00:05:21.347 real 0m0.762s 00:05:21.347 user 0m0.292s 00:05:21.347 sys 0m0.507s 00:05:21.347 10:30:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.347 10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:21.347 ************************************ 00:05:21.347 END TEST per_node_1G_alloc 00:05:21.347 ************************************ 00:05:21.347 10:30:47 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:21.347 10:30:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.347 10:30:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.347 10:30:47 -- common/autotest_common.sh@10 -- # set +x 00:05:21.347 ************************************ 00:05:21.347 START TEST even_2G_alloc 00:05:21.347 ************************************ 00:05:21.347 10:30:47 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:21.347 10:30:47 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:21.347 10:30:47 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:21.347 10:30:47 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:21.347 10:30:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.347 10:30:47 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:21.347 10:30:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:21.347 10:30:47 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:21.347 10:30:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.347 10:30:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:21.347 10:30:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.347 10:30:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.347 10:30:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.347 10:30:47 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:21.347 10:30:47 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:21.347 10:30:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.347 10:30:47 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:21.347 10:30:47 -- setup/hugepages.sh@83 -- # : 0 
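even_2G_alloc, starting here, converts its 2097152 kB request into 1024 hugepages (2 GiB / 2048 kB) and then reruns scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. The even allocation itself ultimately comes down to per-node writes through the kernel's standard sysfs interface; a hedged sketch of that step (this is not the literal setup.sh logic, and it needs root):

# Sizing: requested kB / Hugepagesize = number of 2 MB pages.
echo $(( 2097152 / 2048 ))   # 1024

# Hedged approximation of an even 2 MB-hugepage allocation across NUMA nodes,
# along the lines of what NRHUGE=1024 HUGE_EVEN_ALLOC=yes asks scripts/setup.sh to do.
NRHUGE=1024
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( NRHUGE / ${#nodes[@]} ))
for node in "${nodes[@]}"; do
    echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
done
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo   # expect 1024 pages of 2048 kB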
00:05:21.347 10:30:47 -- setup/hugepages.sh@84 -- # : 0 00:05:21.347 10:30:47 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:21.347 10:30:47 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:21.347 10:30:47 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:21.347 10:30:47 -- setup/hugepages.sh@153 -- # setup output 00:05:21.347 10:30:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.347 10:30:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:21.605 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.544 10:30:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:22.544 10:30:48 -- setup/hugepages.sh@89 -- # local node 00:05:22.544 10:30:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.544 10:30:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.544 10:30:48 -- setup/hugepages.sh@92 -- # local surp 00:05:22.544 10:30:48 -- setup/hugepages.sh@93 -- # local resv 00:05:22.544 10:30:48 -- setup/hugepages.sh@94 -- # local anon 00:05:22.544 10:30:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.544 10:30:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.544 10:30:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.544 10:30:48 -- setup/common.sh@18 -- # local node= 00:05:22.544 10:30:48 -- setup/common.sh@19 -- # local var val 00:05:22.544 10:30:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.544 10:30:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.544 10:30:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.544 10:30:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.544 10:30:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.544 10:30:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.544 10:30:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4216756 kB' 'MemAvailable: 9483460 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375452 kB' 'Inactive: 4121736 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142648 kB' 'Active(file): 1374376 kB' 'Inactive(file): 3979088 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 161260 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301780 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67236 kB' 'KernelStack: 4336 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.544 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.544 10:30:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.544 10:30:48 -- 
setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.545 10:30:48 -- setup/common.sh@33 -- # echo 0 00:05:22.545 10:30:48 -- setup/common.sh@33 -- # return 0 00:05:22.545 10:30:48 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.545 10:30:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.545 10:30:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.545 10:30:48 -- setup/common.sh@18 -- # local node= 00:05:22.545 10:30:48 -- setup/common.sh@19 -- # local var val 00:05:22.545 10:30:48 -- setup/common.sh@20 -- 
# local mem_f mem 00:05:22.545 10:30:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.545 10:30:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.545 10:30:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.545 10:30:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.545 10:30:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4216756 kB' 'MemAvailable: 9483460 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375452 kB' 'Inactive: 4121948 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 142860 kB' 'Active(file): 1374376 kB' 'Inactive(file): 3979088 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 161224 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301780 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67236 kB' 'KernelStack: 4352 kB' 'PageTables: 3616 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 
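What verify_nr_hugepages is doing through this stretch of the trace is collecting a handful of counters and checking them against the configured page count. A condensed sketch, reusing the get_meminfo sketch above (helper names follow the xtrace; the actual bookkeeping in hugepages.sh is more elaborate):

# Condensed sketch of the check being traced here; not the verbatim hugepages.sh logic.
verify_nr_hugepages() {
    local nr_hugepages=1024                   # NRHUGE for even_2G_alloc
    local anon surp resv total
    anon=$(get_meminfo AnonHugePages)         # 0 here; see the THP guard sketched below
    surp=$(get_meminfo HugePages_Surp)        # surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)        # pages reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    (( total == nr_hugepages + surp + resv ))
}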
00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.545 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.545 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
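The anon=0 recorded a little earlier in this block comes from a guard at the top of verify_nr_hugepages (the "[[ always [madvise] never != ... ]]" entry above): AnonHugePages is only worth counting when transparent hugepages are not disabled. A sketch of that guard, using the standard kernel sysfs path; the surrounding logic is illustrative:

# Sketch of the transparent-hugepage guard seen near the start of verify_nr_hugepages.
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # THP may be in use; count anonymous huge pages
else
    anon=0                              # THP disabled; nothing to account for
fi
echo "anon_hugepages=$anon"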
00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.546 10:30:48 -- setup/common.sh@33 -- # echo 0 00:05:22.546 10:30:48 -- setup/common.sh@33 -- # return 0 00:05:22.546 10:30:48 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.546 10:30:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.546 10:30:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.546 10:30:48 -- setup/common.sh@18 -- # local node= 00:05:22.546 10:30:48 -- setup/common.sh@19 -- # local var val 00:05:22.546 10:30:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.546 10:30:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.546 10:30:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.546 10:30:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.546 10:30:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.546 10:30:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4216504 kB' 'MemAvailable: 9483208 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375444 kB' 'Inactive: 4122076 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 142988 kB' 'Active(file): 1374376 kB' 'Inactive(file): 3979088 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 161632 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301772 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67228 kB' 'KernelStack: 4384 kB' 'PageTables: 3688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.546 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.546 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:48 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 
10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 
-- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.547 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.547 10:30:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.547 10:30:49 -- setup/common.sh@33 -- # echo 0 00:05:22.547 10:30:49 -- setup/common.sh@33 -- # return 0 00:05:22.547 nr_hugepages=1024 00:05:22.547 resv_hugepages=0 00:05:22.547 surplus_hugepages=0 00:05:22.547 anon_hugepages=0 00:05:22.547 10:30:49 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.547 10:30:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:22.547 10:30:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.547 10:30:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.547 10:30:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.547 10:30:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.547 10:30:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:22.547 10:30:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.547 10:30:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.547 10:30:49 -- setup/common.sh@18 -- # local node= 00:05:22.547 10:30:49 -- setup/common.sh@19 -- # local var val 00:05:22.547 10:30:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.547 10:30:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.547 10:30:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.547 10:30:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.547 10:30:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.548 10:30:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4216504 kB' 'MemAvailable: 9483208 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375444 kB' 'Inactive: 4121664 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 142576 kB' 'Active(file): 1374376 kB' 'Inactive(file): 3979088 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 161180 kB' 'Mapped: 67936 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301772 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67228 kB' 'KernelStack: 4352 kB' 'PageTables: 3612 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 506144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 
'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
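Note on the long backslash runs such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l above: they are not literal script text, but how bash xtrace (set -x) prints a quoted right-hand side inside [[ ... == ... ]], escaping every character so the comparison is literal rather than a glob. Under that reading, the field scan being traced is roughly the following sketch (a simplified stand-in for the get_meminfo helper named in the trace; the name get_meminfo_value and the reduced argument handling are assumptions, not the script's exact code):

    # Minimal sketch of the meminfo field scan shown in the xtrace above:
    # walk "key: value" pairs and print the value of the requested field.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then   # xtrace renders "$get" as \H\u\g\e\P\a\g\e\s...
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        echo 0
    }

    get_meminfo_value HugePages_Total   # prints 1024 at this point in the run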
00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.548 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.548 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 
-- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.549 10:30:49 -- setup/common.sh@33 -- # echo 1024 00:05:22.549 10:30:49 -- setup/common.sh@33 -- # return 0 00:05:22.549 10:30:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.549 10:30:49 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.549 10:30:49 -- setup/hugepages.sh@27 -- # local node 00:05:22.549 10:30:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.549 10:30:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:22.549 10:30:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.549 10:30:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.549 10:30:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.549 10:30:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.549 10:30:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.549 10:30:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.549 10:30:49 -- setup/common.sh@18 -- # local node=0 00:05:22.549 10:30:49 -- setup/common.sh@19 -- # local var val 00:05:22.549 10:30:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.549 10:30:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.549 10:30:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.549 10:30:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.549 10:30:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.549 10:30:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4216768 kB' 'MemUsed: 8026208 kB' 'SwapCached: 0 kB' 'Active: 1375444 kB' 'Inactive: 4122108 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 143020 kB' 'Active(file): 1374376 kB' 'Inactive(file): 3979088 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 5365124 kB' 'Mapped: 67936 kB' 'AnonPages: 161860 kB' 'Shmem: 2596 kB' 'KernelStack: 4420 kB' 'PageTables: 3632 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234544 kB' 'Slab: 301772 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.549 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.549 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # continue 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.550 10:30:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.550 10:30:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.550 10:30:49 -- setup/common.sh@33 -- # echo 0 00:05:22.550 10:30:49 -- setup/common.sh@33 -- # return 0 00:05:22.550 10:30:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.550 10:30:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.550 10:30:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.550 10:30:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.550 10:30:49 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:22.550 node0=1024 expecting 1024 00:05:22.550 10:30:49 -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:22.550 00:05:22.550 real 0m1.198s 00:05:22.550 user 0m0.276s 00:05:22.550 sys 0m0.927s 00:05:22.550 10:30:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.550 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:22.550 ************************************ 00:05:22.550 END TEST even_2G_alloc 00:05:22.550 ************************************ 00:05:22.550 10:30:49 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:22.550 10:30:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.550 10:30:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.550 10:30:49 -- common/autotest_common.sh@10 -- # set +x 00:05:22.550 ************************************ 00:05:22.550 START TEST odd_alloc 00:05:22.550 ************************************ 00:05:22.550 10:30:49 -- common/autotest_common.sh@1104 -- # odd_alloc 00:05:22.550 10:30:49 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:22.550 10:30:49 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:22.550 10:30:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:22.550 10:30:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:22.550 10:30:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:22.550 10:30:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:22.550 10:30:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:22.550 10:30:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:22.550 10:30:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:22.550 10:30:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:22.550 10:30:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:22.550 10:30:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:22.550 10:30:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:22.550 10:30:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:22.550 10:30:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.550 10:30:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:22.550 10:30:49 -- setup/hugepages.sh@83 -- # : 0 00:05:22.550 10:30:49 -- setup/hugepages.sh@84 -- # : 0 00:05:22.550 10:30:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:22.550 10:30:49 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:22.550 10:30:49 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:22.550 10:30:49 -- setup/hugepages.sh@160 -- # setup output 00:05:22.550 10:30:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:22.550 10:30:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.809 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:22.809 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:23.746 10:30:50 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:23.746 10:30:50 -- setup/hugepages.sh@89 -- # local node 00:05:23.746 10:30:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:23.746 10:30:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:23.746 10:30:50 -- setup/hugepages.sh@92 -- # local surp 00:05:23.746 10:30:50 -- setup/hugepages.sh@93 -- # local resv 00:05:23.746 10:30:50 -- setup/hugepages.sh@94 -- # local anon 00:05:23.746 10:30:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:23.746 10:30:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:23.746 10:30:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:23.746 
10:30:50 -- setup/common.sh@18 -- # local node= 00:05:23.746 10:30:50 -- setup/common.sh@19 -- # local var val 00:05:23.746 10:30:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.746 10:30:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.746 10:30:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.746 10:30:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.746 10:30:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.746 10:30:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.746 10:30:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4220360 kB' 'MemAvailable: 9487064 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375464 kB' 'Inactive: 4118396 kB' 'Active(anon): 1076 kB' 'Inactive(anon): 139320 kB' 'Active(file): 1374388 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 436 kB' 'Writeback: 0 kB' 'AnonPages: 158000 kB' 'Mapped: 67412 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301680 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67136 kB' 'KernelStack: 4356 kB' 'PageTables: 3648 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.746 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.746 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 
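For the odd_alloc test started above, the trace shows HUGEMEM=2049 handed to get_test_nr_hugepages as 2098176 (kB) and an eventual nr_hugepages=1025. Those figures are consistent with 2 MiB hugepages; a quick cross-check of the arithmetic (this only verifies the numbers in the log, it is not the script's own formula):

    echo $(( 2049 * 1024 ))   # 2098176 kB requested (HUGEMEM read as MiB)
    echo $(( 1025 * 2048 ))   # 2099200 kB, matching the 'Hugetlb: 2099200 kB' line in the dump above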
00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 
00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.747 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.747 10:30:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # 
continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:23.748 10:30:50 -- setup/common.sh@33 -- # echo 0 00:05:23.748 10:30:50 -- setup/common.sh@33 -- # return 0 00:05:23.748 10:30:50 -- setup/hugepages.sh@97 -- # anon=0 00:05:23.748 10:30:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:23.748 10:30:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:23.748 10:30:50 -- setup/common.sh@18 -- # local node= 00:05:23.748 10:30:50 -- setup/common.sh@19 -- # local var val 00:05:23.748 10:30:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.748 10:30:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.748 10:30:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.748 10:30:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.748 10:30:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.748 10:30:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4220880 kB' 'MemAvailable: 9487584 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118644 kB' 'Active(anon): 1084 kB' 'Inactive(anon): 139568 kB' 'Active(file): 1374388 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 436 kB' 'Writeback: 0 kB' 'AnonPages: 158016 kB' 'Mapped: 67672 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301680 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67136 kB' 'KernelStack: 4356 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 
kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.748 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.748 10:30:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 
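The verify_nr_hugepages pass being traced here follows the same pattern as the even_2G_alloc check earlier: read the surplus and reserved hugepage counts, require that HugePages_Total equals the requested count plus surplus plus reserved, then repeat the expectation per NUMA node. A hedged sketch of that flow, reusing the get_meminfo_value stand-in from above (function name and structure are assumptions drawn from the xtrace, not the script verbatim):

    verify_hugepages_sketch() {
        local expected=$1 total surp resv
        total=$(get_meminfo_value HugePages_Total)
        surp=$(get_meminfo_value HugePages_Surp)
        resv=$(get_meminfo_value HugePages_Rsvd)
        echo "nr_hugepages=$expected surplus_hugepages=$surp resv_hugepages=$resv"
        (( total == expected + surp + resv )) || return 1   # accounting must balance
        echo "node0=$expected expecting $expected"          # single node (no_nodes=1) in this run
    }

    verify_hugepages_sketch 1025   # odd_alloc expects an odd page count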
00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.749 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.749 10:30:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 
-- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.750 10:30:50 -- setup/common.sh@33 -- # echo 0 00:05:23.750 10:30:50 -- setup/common.sh@33 -- # return 0 00:05:23.750 10:30:50 -- setup/hugepages.sh@99 -- # surp=0 00:05:23.750 10:30:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.750 10:30:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.750 10:30:50 -- setup/common.sh@18 -- # local node= 00:05:23.750 10:30:50 -- setup/common.sh@19 -- # local var val 00:05:23.750 10:30:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.750 10:30:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.750 10:30:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.750 10:30:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.750 10:30:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.750 10:30:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4221140 kB' 
'MemAvailable: 9487844 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375456 kB' 'Inactive: 4118444 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139368 kB' 'Active(file): 1374388 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 436 kB' 'Writeback: 0 kB' 'AnonPages: 158180 kB' 'Mapped: 67592 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301680 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67136 kB' 'KernelStack: 4356 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.750 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.750 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 
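[Editor's note] The xtrace above shows setup/common.sh's get_meminfo helper dumping the full /proc/meminfo snapshot and then walking it field by field with IFS=': ' and read until the requested key (here HugePages_Rsvd) matches. A minimal stand-alone sketch of that lookup pattern follows; the helper name meminfo_value is hypothetical and this is not the repository's exact implementation (the real script also uses mapfile and strips the "Node N" prefix when reading a per-node sysfs meminfo file):

    #!/usr/bin/env bash
    # Sketch of the lookup pattern traced above: scan a meminfo-style file,
    # print the value of one field, return non-zero if the field is absent.
    meminfo_value() {                      # hypothetical helper, illustration only
        local get=$1 file=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do   # the "kB" unit, if any, lands in _
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$file"
        return 1
    }

On the system whose dump appears above, meminfo_value HugePages_Total would print 1025 and meminfo_value HugePages_Rsvd would print 0.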
00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- 
setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 
-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.751 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.751 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:23.752 10:30:50 -- setup/common.sh@33 -- # echo 0 00:05:23.752 10:30:50 -- setup/common.sh@33 -- # return 0 00:05:23.752 10:30:50 -- setup/hugepages.sh@100 -- # resv=0 00:05:23.752 10:30:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:23.752 nr_hugepages=1025 00:05:23.752 10:30:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:23.752 resv_hugepages=0 00:05:23.752 10:30:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:23.752 surplus_hugepages=0 00:05:23.752 10:30:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:23.752 anon_hugepages=0 00:05:23.752 10:30:50 -- 
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:23.752 10:30:50 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:23.752 10:30:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:23.752 10:30:50 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:23.752 10:30:50 -- setup/common.sh@18 -- # local node= 00:05:23.752 10:30:50 -- setup/common.sh@19 -- # local var val 00:05:23.752 10:30:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:23.752 10:30:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.752 10:30:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.752 10:30:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.752 10:30:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.752 10:30:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4221140 kB' 'MemAvailable: 9487844 kB' 'Buffers: 40272 kB' 'Cached: 5324852 kB' 'SwapCached: 0 kB' 'Active: 1375456 kB' 'Inactive: 4118764 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139688 kB' 'Active(file): 1374388 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 436 kB' 'Writeback: 0 kB' 'AnonPages: 158212 kB' 'Mapped: 67592 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301680 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67136 kB' 'KernelStack: 4372 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 
10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.752 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.752 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:23.753 10:30:50 -- setup/common.sh@32 -- # continue 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:23.753 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.012 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.012 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.012 
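[Editor's note] The entries that follow close this pass: get_meminfo echoes 1025 for HugePages_Total, and hugepages.sh re-checks that value against the requested page count plus the surplus and reserved pages gathered in the two scans above. The same arithmetic, condensed into a stand-alone sketch with hypothetical variable names:

    # Sketch of the verification step: the kernel-reported total must equal
    # the requested page count plus surplus and reserved huge pages.
    nr_hugepages=1025                                             # requested by the odd_alloc test
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)    # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)    # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in this run
    (( total == nr_hugepages + surp + resv )) || exit 1           # 1025 == 1025 + 0 + 0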
10:30:50 -- setup/common.sh@33 -- # echo 1025 00:05:24.012 10:30:50 -- setup/common.sh@33 -- # return 0 00:05:24.012 10:30:50 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:24.012 10:30:50 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.012 10:30:50 -- setup/hugepages.sh@27 -- # local node 00:05:24.012 10:30:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.012 10:30:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:24.012 10:30:50 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.012 10:30:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.012 10:30:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.012 10:30:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.012 10:30:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.012 10:30:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.012 10:30:50 -- setup/common.sh@18 -- # local node=0 00:05:24.012 10:30:50 -- setup/common.sh@19 -- # local var val 00:05:24.012 10:30:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.012 10:30:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.012 10:30:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.013 10:30:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.013 10:30:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.013 10:30:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4221140 kB' 'MemUsed: 8021836 kB' 'SwapCached: 0 kB' 'Active: 1375456 kB' 'Inactive: 4118788 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 139712 kB' 'Active(file): 1374388 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 436 kB' 'Writeback: 0 kB' 'FilePages: 5365124 kB' 'Mapped: 67592 kB' 'AnonPages: 158280 kB' 'Shmem: 2596 kB' 'KernelStack: 4404 kB' 'PageTables: 3744 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234544 kB' 'Slab: 301680 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # continue 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.013 10:30:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.013 10:30:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.013 10:30:50 -- setup/common.sh@33 -- # echo 0 00:05:24.013 10:30:50 -- setup/common.sh@33 -- # return 0 00:05:24.013 10:30:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:24.013 10:30:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:24.013 10:30:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:24.014 10:30:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:24.014 10:30:50 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:24.014 node0=1025 expecting 1025 00:05:24.014 10:30:50 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:24.014 00:05:24.014 real 0m1.329s 00:05:24.014 user 0m0.327s 00:05:24.014 sys 0m0.929s 00:05:24.014 10:30:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.014 10:30:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.014 ************************************ 00:05:24.014 END TEST odd_alloc 00:05:24.014 ************************************ 00:05:24.014 10:30:50 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:24.014 10:30:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.014 10:30:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.014 10:30:50 -- common/autotest_common.sh@10 -- # set +x 00:05:24.014 ************************************ 00:05:24.014 START TEST custom_alloc 00:05:24.014 ************************************ 00:05:24.014 10:30:50 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:24.014 10:30:50 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:24.014 10:30:50 -- setup/hugepages.sh@169 -- # local node 00:05:24.014 10:30:50 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:24.014 10:30:50 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:24.014 10:30:50 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:24.014 10:30:50 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:24.014 10:30:50 
-- setup/hugepages.sh@49 -- # local size=1048576 00:05:24.014 10:30:50 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:24.014 10:30:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:24.014 10:30:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.014 10:30:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.014 10:30:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:24.014 10:30:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.014 10:30:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.014 10:30:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.014 10:30:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:24.014 10:30:50 -- setup/hugepages.sh@83 -- # : 0 00:05:24.014 10:30:50 -- setup/hugepages.sh@84 -- # : 0 00:05:24.014 10:30:50 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:24.014 10:30:50 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:24.014 10:30:50 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:24.014 10:30:50 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:24.014 10:30:50 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:24.014 10:30:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:24.014 10:30:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:24.014 10:30:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:24.014 10:30:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:24.014 10:30:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:24.014 10:30:50 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:24.014 10:30:50 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:24.014 10:30:50 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:24.014 10:30:50 -- setup/hugepages.sh@78 -- # return 0 00:05:24.014 10:30:50 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:24.014 10:30:50 -- setup/hugepages.sh@187 -- # setup output 00:05:24.014 10:30:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.014 10:30:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:24.272 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.840 10:30:51 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:24.840 10:30:51 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:24.840 10:30:51 -- setup/hugepages.sh@89 -- # local node 00:05:24.840 10:30:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.840 10:30:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.840 10:30:51 -- setup/hugepages.sh@92 -- # local surp 00:05:24.840 10:30:51 -- setup/hugepages.sh@93 -- # local resv 00:05:24.840 10:30:51 -- setup/hugepages.sh@94 -- # local anon 00:05:24.840 10:30:51 
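[Editor's note] For custom_alloc, get_test_nr_hugepages is handed 1048576 kB (1 GiB); with the 2048 kB default hugepage size reported in the meminfo dumps, that works out to the 512 pages assigned to node 0 and exported as HUGENODE before scripts/setup.sh re-runs, as the trace just above shows. The same conversion as a stand-alone sketch (variable names are illustrative, not the script's own):

    # Sketch of the size-to-pages conversion traced above.
    size_kb=1048576                                       # requested allocation, in kB (1 GiB)
    default_hugepage_kb=2048                              # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( size_kb / default_hugepage_kb ))     # 1048576 / 2048 = 512
    HUGENODE="nodes_hp[0]=${nr_hugepages}"                # pin all 512 pages to node 0
    echo "$HUGENODE"                                      # prints: nodes_hp[0]=512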
-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.840 10:30:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.840 10:30:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.840 10:30:51 -- setup/common.sh@18 -- # local node= 00:05:24.840 10:30:51 -- setup/common.sh@19 -- # local var val 00:05:24.840 10:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.840 10:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.840 10:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.840 10:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.840 10:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.840 10:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5270424 kB' 'MemAvailable: 10537140 kB' 'Buffers: 40280 kB' 'Cached: 5324856 kB' 'SwapCached: 0 kB' 'Active: 1375480 kB' 'Inactive: 4118504 kB' 'Active(anon): 1080 kB' 'Inactive(anon): 139428 kB' 'Active(file): 1374400 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'AnonPages: 158324 kB' 'Mapped: 67216 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301604 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67060 kB' 'KernelStack: 4280 kB' 'PageTables: 3296 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@32 
-- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.840 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.840 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 
-- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.841 10:30:51 -- setup/common.sh@33 -- # echo 0 00:05:24.841 10:30:51 -- setup/common.sh@33 -- # return 0 00:05:24.841 10:30:51 -- setup/hugepages.sh@97 -- # anon=0 00:05:24.841 10:30:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.841 10:30:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.841 10:30:51 -- setup/common.sh@18 -- # local node= 00:05:24.841 10:30:51 -- setup/common.sh@19 -- # local var val 00:05:24.841 10:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.841 10:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.841 10:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.841 10:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.841 10:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.841 10:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5270172 kB' 'MemAvailable: 10536888 kB' 'Buffers: 40280 kB' 'Cached: 5324856 kB' 'SwapCached: 0 kB' 'Active: 1375464 kB' 'Inactive: 4118572 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139496 kB' 'Active(file): 1374400 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'AnonPages: 158256 kB' 'Mapped: 66956 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301596 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67052 kB' 'KernelStack: 4300 kB' 'PageTables: 3600 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 5597200 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19348 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.841 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.841 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
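The repeated IFS=': ' / read / continue entries here are setup/common.sh's get_meminfo helper scanning every key of the selected meminfo file until it reaches the requested field (AnonPages and HugePages_Surp in this stretch), echoing that key's value and returning. A minimal stand-alone version of that lookup, reconstructed only from the common.sh@17-@33 calls visible in this trace (anything beyond them, such as argument checking, is assumed), is:

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}        # field name, optional NUMA node id
    local mem_f=/proc/meminfo
    # per-node queries read the sysfs copy when it exists (common.sh@23-@24)
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"                    # common.sh@28
    mem=("${mem[@]#Node +([0-9]) }")             # drop "Node N " prefixes (@29)
    local var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # @31
        [[ $var == "$get" ]] || continue         # @32: skip non-matching keys
        echo "$val"                              # @33: e.g. 0 for HugePages_Surp
        return 0
    done
    return 1
}

# get_meminfo HugePages_Surp     -> "0" on this host
# get_meminfo HugePages_Total 0  -> per-node total read from node0's meminfo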
00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.842 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.842 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.843 10:30:51 -- setup/common.sh@33 -- # echo 0 00:05:24.843 10:30:51 -- setup/common.sh@33 -- # return 0 00:05:24.843 10:30:51 -- setup/hugepages.sh@99 -- # surp=0 00:05:24.843 10:30:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.843 10:30:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.843 10:30:51 -- setup/common.sh@18 -- # local node= 00:05:24.843 10:30:51 -- setup/common.sh@19 -- # local var val 00:05:24.843 10:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.843 10:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.843 10:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.843 10:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.843 10:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.843 10:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5270424 kB' 'MemAvailable: 10537140 kB' 'Buffers: 40280 kB' 'Cached: 5324856 kB' 'SwapCached: 0 kB' 'Active: 1375468 kB' 'Inactive: 4118320 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139248 kB' 'Active(file): 1374404 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'AnonPages: 157960 kB' 'Mapped: 66988 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301588 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67044 kB' 'KernelStack: 4272 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 
-- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 
10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.843 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.843 10:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:24.844 10:30:51 -- setup/common.sh@33 -- # echo 0 00:05:24.844 10:30:51 -- setup/common.sh@33 -- # return 0 00:05:24.844 10:30:51 -- setup/hugepages.sh@100 -- # resv=0 00:05:24.844 10:30:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:24.844 nr_hugepages=512 00:05:24.844 10:30:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:24.844 resv_hugepages=0 00:05:24.844 
10:30:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:24.844 surplus_hugepages=0 00:05:24.844 10:30:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:24.844 anon_hugepages=0 00:05:24.844 10:30:51 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:24.844 10:30:51 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:24.844 10:30:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:24.844 10:30:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:24.844 10:30:51 -- setup/common.sh@18 -- # local node= 00:05:24.844 10:30:51 -- setup/common.sh@19 -- # local var val 00:05:24.844 10:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.844 10:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.844 10:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.844 10:30:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.844 10:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.844 10:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5270424 kB' 'MemAvailable: 10537140 kB' 'Buffers: 40280 kB' 'Cached: 5324856 kB' 'SwapCached: 0 kB' 'Active: 1375468 kB' 'Inactive: 4118276 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139204 kB' 'Active(file): 1374404 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'AnonPages: 157932 kB' 'Mapped: 66988 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301588 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67044 kB' 'KernelStack: 4272 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 
00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.844 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.844 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 
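With anon=0, surp=0 and resv=0 established, hugepages.sh@107-@110 verify that HugePages_Total equals nr_hugepages + surp + resv (512 here), then repeat the lookup per NUMA node through /sys/devices/system/node/node0/meminfo and expect node0=512. Condensed into a self-contained check built on the get_meminfo sketch above (verify_hugepages is a made-up wrapper name, not the script's own function, and it checks only node0 rather than looping over all nodes; 512/node0 are simply what this run configured):

verify_hugepages() {
    local expected=$1                      # 512 in this run
    local anon surp resv total node0
    anon=$(get_meminfo AnonHugePages)      # 0 kB in the trace above
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)
    # global accounting: all configured pages present, none surplus or reserved
    (( total == expected + surp + resv )) || return 1
    # per-node accounting: node0 is expected to hold every page on this box
    node0=$(get_meminfo HugePages_Total 0)
    echo "node0=$node0 expecting $expected"
    [[ $node0 == "$expected" ]]
}

# verify_hugepages 512   # prints "node0=512 expecting 512" and succeeds here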
00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.845 
10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.845 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.845 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:24.845 10:30:51 -- setup/common.sh@33 -- # echo 512 00:05:24.845 10:30:51 -- setup/common.sh@33 -- # return 0 00:05:24.845 10:30:51 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:24.845 10:30:51 -- setup/hugepages.sh@112 -- # get_nodes 00:05:24.845 10:30:51 -- setup/hugepages.sh@27 -- # local node 00:05:24.845 10:30:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:24.845 10:30:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:24.845 10:30:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:24.846 10:30:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:24.846 10:30:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:24.846 10:30:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:24.846 10:30:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:24.846 10:30:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.846 10:30:51 -- setup/common.sh@18 -- # local node=0 00:05:24.846 10:30:51 -- setup/common.sh@19 -- # local var val 00:05:24.846 10:30:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:24.846 10:30:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.846 10:30:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:24.846 10:30:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:24.846 10:30:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.846 10:30:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 5270928 kB' 'MemUsed: 6972048 kB' 'SwapCached: 0 kB' 'Active: 1375468 kB' 'Inactive: 4118396 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139324 kB' 'Active(file): 1374404 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 92 kB' 'Writeback: 0 kB' 'FilePages: 5365136 kB' 'Mapped: 66988 kB' 'AnonPages: 158076 kB' 'Shmem: 2596 kB' 'KernelStack: 4304 kB' 'PageTables: 3440 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234544 kB' 'Slab: 301596 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # 
continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.846 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.846 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.847 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.847 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.847 10:30:51 -- setup/common.sh@32 -- # continue 00:05:24.847 10:30:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:24.847 10:30:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:24.847 10:30:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.847 10:30:51 -- setup/common.sh@33 -- # echo 0 00:05:24.847 10:30:51 -- setup/common.sh@33 -- # return 0 00:05:25.107 10:30:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.107 10:30:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.107 10:30:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.107 10:30:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.107 10:30:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:25.107 node0=512 expecting 512 00:05:25.107 10:30:51 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:25.107 00:05:25.107 real 0m0.968s 00:05:25.107 user 0m0.349s 00:05:25.107 sys 0m0.554s 00:05:25.107 10:30:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.107 10:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:25.107 ************************************ 00:05:25.107 END TEST custom_alloc 00:05:25.107 ************************************ 00:05:25.107 10:30:51 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:25.107 10:30:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.107 10:30:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.107 10:30:51 -- common/autotest_common.sh@10 -- # set +x 00:05:25.107 ************************************ 00:05:25.107 START TEST no_shrink_alloc 00:05:25.107 ************************************ 00:05:25.107 10:30:51 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:25.107 10:30:51 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:25.107 10:30:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:25.107 10:30:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:25.107 10:30:51 -- 
setup/hugepages.sh@51 -- # shift 00:05:25.107 10:30:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:25.107 10:30:51 -- setup/hugepages.sh@52 -- # local node_ids 00:05:25.107 10:30:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:25.107 10:30:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:25.107 10:30:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:25.107 10:30:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:25.107 10:30:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:25.107 10:30:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:25.107 10:30:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:25.107 10:30:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:25.107 10:30:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:25.107 10:30:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:25.107 10:30:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:25.107 10:30:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:25.107 10:30:51 -- setup/hugepages.sh@73 -- # return 0 00:05:25.107 10:30:51 -- setup/hugepages.sh@198 -- # setup output 00:05:25.107 10:30:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.107 10:30:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:25.366 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:25.366 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.305 10:30:52 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:26.305 10:30:52 -- setup/hugepages.sh@89 -- # local node 00:05:26.305 10:30:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.305 10:30:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.305 10:30:52 -- setup/hugepages.sh@92 -- # local surp 00:05:26.305 10:30:52 -- setup/hugepages.sh@93 -- # local resv 00:05:26.305 10:30:52 -- setup/hugepages.sh@94 -- # local anon 00:05:26.305 10:30:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.305 10:30:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.305 10:30:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.305 10:30:52 -- setup/common.sh@18 -- # local node= 00:05:26.305 10:30:52 -- setup/common.sh@19 -- # local var val 00:05:26.305 10:30:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.305 10:30:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.305 10:30:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.305 10:30:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.305 10:30:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.305 10:30:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.305 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.305 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.305 10:30:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4215320 kB' 'MemAvailable: 9482040 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375468 kB' 'Inactive: 4118468 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139392 kB' 'Active(file): 1374404 kB' 'Inactive(file): 3979076 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 157968 kB' 'Mapped: 66956 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301668 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67124 kB' 
'KernelStack: 4304 kB' 'PageTables: 3432 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19348 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.305 10:30:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 
-- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- 
setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.306 10:30:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.306 10:30:52 -- setup/common.sh@33 -- # echo 0 00:05:26.306 10:30:52 -- setup/common.sh@33 -- # return 0 00:05:26.306 10:30:52 -- setup/hugepages.sh@97 -- # anon=0 00:05:26.306 10:30:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.306 10:30:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.306 10:30:52 -- setup/common.sh@18 -- # local node= 00:05:26.306 10:30:52 -- setup/common.sh@19 -- # local var val 00:05:26.306 10:30:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.306 10:30:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.306 10:30:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.306 10:30:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.306 10:30:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.306 10:30:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.306 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4215320 kB' 'MemAvailable: 9482040 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118404 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139332 kB' 'Active(file): 1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 157964 kB' 'Mapped: 66956 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301668 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67124 kB' 'KernelStack: 4288 kB' 'PageTables: 3388 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19348 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 
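For reference, the setup/common.sh helper being traced here (get_meminfo) walks a meminfo file field by field: it picks /proc/meminfo or, when a node id is supplied, the per-node copy under /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix, then reads each line with IFS=': ' until the requested key matches and echoes its value. A minimal stand-alone sketch of that behaviour, assuming bash with extglob (the exact option and argument handling in setup/common.sh may differ):

    shopt -s extglob                          # for the +([0-9]) pattern below

    get_meminfo() {                           # usage: get_meminfo <key> [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # With a node id, read the per-node view exposed by sysfs instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
    }

With the hugepage pool configured above, get_meminfo HugePages_Total prints 1024 and get_meminfo AnonHugePages prints 0, matching the values echoed in the trace.
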
00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.307 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.307 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- 
setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.308 10:30:52 -- setup/common.sh@33 -- # echo 0 00:05:26.308 10:30:52 -- setup/common.sh@33 -- # return 0 00:05:26.308 10:30:52 -- setup/hugepages.sh@99 -- # surp=0 00:05:26.308 10:30:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.308 10:30:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.308 10:30:52 -- setup/common.sh@18 -- # local node= 00:05:26.308 10:30:52 -- setup/common.sh@19 -- # local var val 00:05:26.308 10:30:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.308 10:30:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.308 10:30:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.308 10:30:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.308 10:30:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.308 10:30:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4215320 kB' 'MemAvailable: 9482040 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118384 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139312 kB' 'Active(file): 1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 157940 kB' 'Mapped: 66956 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301668 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67124 kB' 'KernelStack: 4272 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 
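At this point the test has already read AnonHugePages (anon=0) and HugePages_Surp (surp=0) and is fetching HugePages_Rsvd; once all three are known they are compared against HugePages_Total. The consistency check this is building toward can be reproduced on its own roughly as follows (using the get_meminfo sketch above; 1024 is the nr_hugepages value requested earlier in this test, and the variable names mirror the trace rather than quote hugepages.sh verbatim):

    nr_hugepages=1024                        # requested via get_test_nr_hugepages above
    anon=$(get_meminfo AnonHugePages)        # transparent hugepages in use, expected 0
    surp=$(get_meminfo HugePages_Surp)       # surplus pages beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)       # pages reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # The pool is consistent when the kernel-reported total accounts for the
    # requested pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) && echo consistent || echo mismatch
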
00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 
-- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.308 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.308 10:30:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.309 10:30:52 -- setup/common.sh@33 -- # echo 0 00:05:26.309 10:30:52 -- setup/common.sh@33 -- # return 0 00:05:26.309 10:30:52 -- setup/hugepages.sh@100 -- # resv=0 00:05:26.309 10:30:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:26.309 nr_hugepages=1024 00:05:26.309 10:30:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.309 resv_hugepages=0 00:05:26.309 10:30:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.309 surplus_hugepages=0 00:05:26.309 10:30:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.309 anon_hugepages=0 00:05:26.309 10:30:52 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.309 10:30:52 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:26.309 10:30:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.309 10:30:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.309 10:30:52 -- setup/common.sh@18 -- # local node= 00:05:26.309 10:30:52 -- setup/common.sh@19 -- # local var val 00:05:26.309 10:30:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.309 10:30:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.309 10:30:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.309 10:30:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.309 10:30:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.309 10:30:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4215320 kB' 'MemAvailable: 9482040 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118360 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139288 kB' 'Active(file): 
1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 157912 kB' 'Mapped: 66956 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301668 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67124 kB' 'KernelStack: 4256 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19364 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.309 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.309 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.309 
10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 
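Shortly after this point the trace finishes the system-wide HugePages_Total lookup (1024) and switches to the per-node view under /sys/devices/system/node/node0/meminfo, so the pool can be attributed to node 0. A rough stand-alone equivalent of that per-node accounting, reusing the get_meminfo sketch above (the nodes_sys/nodes_test names follow the trace, but this is an approximation of hugepages.sh, not its exact source):

    declare -a nodes_sys nodes_test
    # Ask the kernel how many hugepages each NUMA node currently holds.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        nodes_sys[node]=$(get_meminfo HugePages_Total "$node")
    done
    # This test expects the whole 1024-page pool on node 0, plus any
    # surplus pages the kernel reports for that node (0 here).
    nodes_test[0]=1024
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")
        (( nodes_test[node] += surp ))
        echo "node$node: expecting ${nodes_test[node]}, kernel reports ${nodes_sys[node]}"
    done
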
00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # 
continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.310 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.310 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.311 10:30:52 -- setup/common.sh@33 -- # echo 1024 00:05:26.311 10:30:52 -- setup/common.sh@33 -- # return 0 00:05:26.311 10:30:52 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.311 10:30:52 -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.311 10:30:52 -- setup/hugepages.sh@27 -- # local node 00:05:26.311 10:30:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.311 10:30:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.311 10:30:52 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.311 10:30:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.311 10:30:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.311 10:30:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.311 10:30:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.311 10:30:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.311 10:30:52 -- setup/common.sh@18 -- # local node=0 00:05:26.311 10:30:52 -- setup/common.sh@19 -- # local var val 00:05:26.311 10:30:52 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.311 10:30:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.311 10:30:52 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:26.311 10:30:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.311 10:30:52 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.311 10:30:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4215320 kB' 'MemUsed: 8027656 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118156 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139084 kB' 'Active(file): 1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'FilePages: 5365140 kB' 'Mapped: 66956 kB' 'AnonPages: 157932 kB' 'Shmem: 2596 kB' 'KernelStack: 4272 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234544 kB' 'Slab: 301668 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67124 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.311 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.311 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # continue 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.312 10:30:52 -- 
setup/common.sh@32 -- # continue 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.312 10:30:52 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.312 10:30:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.312 10:30:52 -- setup/common.sh@33 -- # echo 0 00:05:26.312 10:30:52 -- setup/common.sh@33 -- # return 0 00:05:26.312 10:30:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.312 10:30:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.312 10:30:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.312 10:30:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.312 10:30:52 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:26.312 node0=1024 expecting 1024 00:05:26.312 10:30:52 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:26.312 10:30:52 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:26.312 10:30:52 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:26.312 10:30:52 -- setup/hugepages.sh@202 -- # setup output 00:05:26.312 10:30:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.312 10:30:52 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:26.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:26.570 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:26.570 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:26.570 10:30:53 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:26.570 10:30:53 -- setup/hugepages.sh@89 -- # local node 00:05:26.570 10:30:53 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:26.570 10:30:53 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:26.570 10:30:53 -- setup/hugepages.sh@92 -- # local surp 00:05:26.570 10:30:53 -- setup/hugepages.sh@93 -- # local resv 00:05:26.570 10:30:53 -- setup/hugepages.sh@94 -- # local anon 00:05:26.570 10:30:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:26.570 10:30:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:26.570 10:30:53 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:26.570 10:30:53 -- setup/common.sh@18 -- # local node= 00:05:26.570 10:30:53 -- setup/common.sh@19 -- # local var val 00:05:26.570 10:30:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.570 10:30:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.570 10:30:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.570 10:30:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.570 10:30:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.570 10:30:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.570 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4214136 kB' 'MemAvailable: 9480856 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375480 kB' 'Inactive: 4119500 kB' 'Active(anon): 1072 kB' 'Inactive(anon): 140428 kB' 'Active(file): 1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 158340 kB' 'Mapped: 66956 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301668 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67124 kB' 'KernelStack: 4392 kB' 'PageTables: 4208 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 
kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 497824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.571 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.571 10:30:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.858 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.858 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.858 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.858 10:30:53 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.858 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.858 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.858 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.858 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.858 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.858 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.858 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.859 10:30:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.859 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.859 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.860 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.860 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- 
setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.861 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.861 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 
-- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:26.862 10:30:53 -- setup/common.sh@33 -- # echo 0 00:05:26.862 10:30:53 -- setup/common.sh@33 -- # return 0 00:05:26.862 10:30:53 -- setup/hugepages.sh@97 -- # anon=0 00:05:26.862 10:30:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:26.862 10:30:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.862 10:30:53 -- setup/common.sh@18 -- # local node= 00:05:26.862 10:30:53 -- setup/common.sh@19 -- # local var val 00:05:26.862 10:30:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.862 10:30:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.862 10:30:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.862 10:30:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.862 10:30:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.862 10:30:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4214368 kB' 'MemAvailable: 9481088 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118920 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139848 kB' 'Active(file): 1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 158080 kB' 'Mapped: 67024 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301896 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67352 kB' 'KernelStack: 4312 kB' 'PageTables: 3828 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 
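The xtrace above is single-stepping a meminfo lookup: choose /proc/meminfo or the per-node file under sysfs, read it into an array, strip any "Node N " prefix, then scan "Key: value" pairs until the requested field matches and echo its value. A minimal sketch of that loop, assuming a hypothetical helper name get_meminfo_sketch rather than the exact scripts/setup/common.sh implementation:

    #!/usr/bin/env bash
    # Simplified sketch only; the real lookup lives in scripts/setup/common.sh.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2-}
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node view when sysfs provides one.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local entry var val _
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp 0   # should print 0 for the node traced above

The escaped patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the trace are just how bash -x renders the quoted right-hand side of that [[ ... == ... ]] comparison; each "continue" entry is one non-matching meminfo key being skipped.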
00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.862 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.862 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.863 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.863 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.864 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.864 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.865 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.865 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- 
setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.866 10:30:53 -- setup/common.sh@33 -- # echo 0 00:05:26.866 10:30:53 -- setup/common.sh@33 -- # return 0 00:05:26.866 10:30:53 -- setup/hugepages.sh@99 -- # surp=0 00:05:26.866 10:30:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:26.866 10:30:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:26.866 10:30:53 -- setup/common.sh@18 -- # local node= 00:05:26.866 10:30:53 -- setup/common.sh@19 -- # local var val 00:05:26.866 10:30:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.866 10:30:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.866 10:30:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.866 10:30:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.866 10:30:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.866 10:30:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4214084 kB' 'MemAvailable: 9480804 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118928 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139856 kB' 'Active(file): 1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 158632 kB' 'Mapped: 67008 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301896 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67352 kB' 'KernelStack: 4360 kB' 'PageTables: 3604 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19348 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 
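This run of lookups (AnonHugePages, HugePages_Surp, HugePages_Rsvd, then HugePages_Total) feeds the pool consistency check performed a little further down: HugePages_Total must equal the expected count plus any surplus and reserved pages, with AnonHugePages (transparent hugepage usage, in kB) reported alongside rather than summed. A rough, self-contained equivalent; the helper names meminfo_val and check_hugepage_pool are illustrative assumptions, not the setup/hugepages.sh functions:

    #!/usr/bin/env bash
    # Illustrative only; mirrors the (( 1024 == nr_hugepages + surp + resv )) check
    # visible later in the trace, not the literal setup/hugepages.sh code.
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

    check_hugepage_pool() {
        local expected=$1
        local anon surp resv total
        anon=$(meminfo_val AnonHugePages)     # THP usage in kB, reported but not summed
        surp=$(meminfo_val HugePages_Surp)    # surplus pages allocated on demand
        resv=$(meminfo_val HugePages_Rsvd)    # reserved but not yet faulted in
        total=$(meminfo_val HugePages_Total)
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        (( total == expected + surp + resv ))
    }

    check_hugepage_pool 1024 && echo "hugepage pool matches expectation"

In this log all three of surplus, reserved, and anonymous hugepages are 0, so the 1024 configured pages satisfy the check directly.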
00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 
-- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.866 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.866 10:30:53 -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.866 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:26.867 10:30:53 -- setup/common.sh@33 -- # echo 0 00:05:26.867 10:30:53 -- setup/common.sh@33 -- # return 0 00:05:26.867 10:30:53 -- setup/hugepages.sh@100 -- # resv=0 00:05:26.867 nr_hugepages=1024 00:05:26.867 10:30:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:26.867 resv_hugepages=0 00:05:26.867 10:30:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:26.867 surplus_hugepages=0 00:05:26.867 10:30:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:26.867 anon_hugepages=0 00:05:26.867 10:30:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:26.867 10:30:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.867 10:30:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:26.867 10:30:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:26.867 10:30:53 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:26.867 10:30:53 -- setup/common.sh@18 -- # local node= 00:05:26.867 10:30:53 -- setup/common.sh@19 -- # local var val 00:05:26.867 10:30:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.867 10:30:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.867 10:30:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:26.867 10:30:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:26.867 10:30:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.867 10:30:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4214348 kB' 'MemAvailable: 9481068 kB' 'Buffers: 40280 kB' 'Cached: 5324860 kB' 'SwapCached: 0 kB' 'Active: 1375464 kB' 'Inactive: 4118768 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 139696 kB' 'Active(file): 
1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'AnonPages: 158460 kB' 'Mapped: 67008 kB' 'Shmem: 2596 kB' 'KReclaimable: 234544 kB' 'Slab: 301896 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67352 kB' 'KernelStack: 4328 kB' 'PageTables: 3528 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19380 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 155500 kB' 'DirectMap2M: 4038656 kB' 'DirectMap1G: 10485760 kB' 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 
10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.867 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.867 10:30:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 
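Once the system-wide totals check out, the per-node pass (get_nodes plus the node-scoped get_meminfo calls) repeats the accounting under /sys/devices/system/node and produces the "node0=1024 expecting 1024" summary seen earlier in the log. A sketch of that enumeration, again with assumed names and structure rather than the script's own:

    #!/usr/bin/env bash
    # Assumption-laden sketch of the per-node pass; not the get_nodes implementation.
    shopt -s nullglob
    declare -A nodes_sys

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
        nodes_sys[$node]=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
    done

    for node in "${!nodes_sys[@]}"; do
        echo "node${node}=${nodes_sys[$node]} expecting 1024"   # matches the log's summary line
    done

This VM has a single NUMA node, so no_nodes=1 in the trace and only node0 is checked.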
00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # 
continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.868 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.868 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:26.868 10:30:53 -- setup/common.sh@33 -- # echo 1024 00:05:26.868 10:30:53 -- setup/common.sh@33 -- # return 0 00:05:26.868 10:30:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:26.868 10:30:53 -- setup/hugepages.sh@112 -- # get_nodes 00:05:26.869 10:30:53 -- setup/hugepages.sh@27 -- # local node 00:05:26.869 10:30:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:26.869 10:30:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:26.869 10:30:53 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:26.869 10:30:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:26.869 10:30:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:26.869 10:30:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:26.869 10:30:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:26.869 10:30:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:26.869 10:30:53 -- setup/common.sh@18 -- # local node=0 00:05:26.869 10:30:53 -- setup/common.sh@19 -- # local var val 00:05:26.869 10:30:53 -- setup/common.sh@20 -- # local mem_f mem 00:05:26.869 10:30:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:26.869 10:30:53 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:05:26.869 10:30:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:26.869 10:30:53 -- setup/common.sh@28 -- # mapfile -t mem 00:05:26.869 10:30:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242976 kB' 'MemFree: 4214600 kB' 'MemUsed: 8028376 kB' 'SwapCached: 0 kB' 'Active: 1375472 kB' 'Inactive: 4118320 kB' 'Active(anon): 1064 kB' 'Inactive(anon): 139248 kB' 'Active(file): 1374408 kB' 'Inactive(file): 3979072 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 88 kB' 'Writeback: 0 kB' 'FilePages: 5365140 kB' 'Mapped: 67032 kB' 'AnonPages: 157788 kB' 'Shmem: 2596 kB' 'KernelStack: 4380 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 234544 kB' 'Slab: 301896 kB' 'SReclaimable: 234544 kB' 'SUnreclaim: 67352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.869 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.869 10:30:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- 
setup/common.sh@32 -- # continue 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # IFS=': ' 00:05:26.870 10:30:53 -- setup/common.sh@31 -- # read -r var val _ 00:05:26.870 10:30:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:26.870 10:30:53 -- setup/common.sh@33 -- # echo 0 00:05:26.870 10:30:53 -- setup/common.sh@33 -- # return 0 00:05:26.870 10:30:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:26.870 10:30:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:26.870 10:30:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:26.870 10:30:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:26.870 node0=1024 expecting 1024 00:05:26.870 10:30:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:26.870 10:30:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:26.870 00:05:26.870 real 0m1.824s 00:05:26.870 user 0m0.622s 00:05:26.870 sys 0m1.149s 00:05:26.870 10:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.870 10:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:26.870 ************************************ 00:05:26.870 END TEST no_shrink_alloc 00:05:26.870 ************************************ 00:05:26.870 10:30:53 -- setup/hugepages.sh@217 -- # clear_hp 00:05:26.870 10:30:53 -- setup/hugepages.sh@37 -- # local node hp 00:05:26.870 10:30:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:26.870 10:30:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:26.870 10:30:53 -- setup/hugepages.sh@41 -- # echo 0 00:05:26.870 10:30:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:26.870 10:30:53 -- setup/hugepages.sh@41 -- # echo 0 00:05:26.870 10:30:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:26.870 10:30:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:26.870 00:05:26.870 real 0m7.890s 00:05:26.870 user 0m2.458s 00:05:26.870 sys 0m5.269s 00:05:26.870 10:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.870 10:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:26.870 ************************************ 00:05:26.870 END TEST hugepages 00:05:26.870 ************************************ 00:05:26.870 10:30:53 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:26.870 10:30:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.870 10:30:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.870 10:30:53 -- common/autotest_common.sh@10 -- # set +x 00:05:26.870 ************************************ 00:05:26.870 START TEST driver 00:05:26.870 ************************************ 00:05:26.870 10:30:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:27.131 * Looking for test storage... 
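The hugepages checks that finish above never grep /proc/meminfo; they read it (or a node's /sys/devices/system/node/nodeN/meminfo) field by field with IFS=': ' until the requested key turns up, which is why every other key produces a "continue" entry in the trace. A minimal sketch of that lookup, using a hypothetical get_meminfo_value helper rather than the exact function from test/setup/common.sh:

  # Sketch only; the real helper lives in test/setup/common.sh (get_meminfo).
  get_meminfo_value() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # Per-node files prefix each line with "Node <N> "; strip it so both
      # file formats split the same way on ':' and whitespace.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }

  get_meminfo_value HugePages_Total     # system-wide; the run above expects 1024
  get_meminfo_value HugePages_Surp 0    # node 0; the run above expects 0

In the run above the two lookups return 1024 and 0, which is what lets the 'node0=1024 expecting 1024' check pass.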
00:05:27.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:27.131 10:30:53 -- setup/driver.sh@68 -- # setup reset 00:05:27.131 10:30:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:27.131 10:30:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:27.389 10:30:54 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:27.389 10:30:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.389 10:30:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.389 10:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:27.389 ************************************ 00:05:27.389 START TEST guess_driver 00:05:27.389 ************************************ 00:05:27.389 10:30:54 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:27.389 10:30:54 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:27.389 10:30:54 -- setup/driver.sh@47 -- # local fail=0 00:05:27.389 10:30:54 -- setup/driver.sh@49 -- # pick_driver 00:05:27.389 10:30:54 -- setup/driver.sh@36 -- # vfio 00:05:27.389 10:30:54 -- setup/driver.sh@21 -- # local iommu_grups 00:05:27.389 10:30:54 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:27.389 10:30:54 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:27.389 10:30:54 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:27.389 10:30:54 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:27.389 10:30:54 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:27.389 10:30:54 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:27.389 10:30:54 -- setup/driver.sh@32 -- # return 1 00:05:27.389 10:30:54 -- setup/driver.sh@38 -- # uio 00:05:27.389 10:30:54 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:27.389 10:30:54 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:27.389 10:30:54 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:27.389 10:30:54 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:27.389 10:30:54 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:27.389 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:27.389 10:30:54 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:27.389 10:30:54 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:27.389 10:30:54 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:27.389 Looking for driver=uio_pci_generic 00:05:27.389 10:30:54 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:27.389 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.389 10:30:54 -- setup/driver.sh@45 -- # setup output config 00:05:27.389 10:30:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.389 10:30:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.955 10:30:54 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:27.955 10:30:54 -- setup/driver.sh@58 -- # continue 00:05:27.955 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:27.955 10:30:54 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:27.955 10:30:54 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:27.955 10:30:54 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:28.888 10:30:55 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:28.888 10:30:55 -- setup/driver.sh@65 -- # setup reset 
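guess_driver, traced above, settles on uio_pci_generic because this VM exposes no populated IOMMU groups and unsafe no-IOMMU mode is off, so the vfio path returns 1 and the fallback is validated with modprobe --show-depends. A condensed sketch of that decision, collapsed into one hypothetical function instead of the pick_driver/vfio/uio helpers in test/setup/driver.sh:

  # Sketch of the selection rule exercised above, not the literal driver.sh code.
  pick_pci_driver() {
      local unsafe=N
      [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
          unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)

      # vfio-pci needs populated IOMMU groups, or unsafe no-IOMMU mode enabled.
      if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
          echo vfio-pci
          return 0
      fi

      # Fall back to uio_pci_generic only if modprobe can resolve it (or its
      # dependencies) to real .ko files on this kernel, as checked above.
      if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
          echo uio_pci_generic
          return 0
      fi

      echo 'No valid driver found' >&2
      return 1
  }

On this kernel the modprobe probe resolves uio.ko plus uio_pci_generic.ko, so the test proceeds with 'Looking for driver=uio_pci_generic'.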
00:05:28.888 10:30:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:28.888 10:30:55 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.454 00:05:29.454 real 0m1.979s 00:05:29.454 user 0m0.452s 00:05:29.454 sys 0m1.515s 00:05:29.454 10:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.454 ************************************ 00:05:29.454 END TEST guess_driver 00:05:29.454 ************************************ 00:05:29.454 10:30:56 -- common/autotest_common.sh@10 -- # set +x 00:05:29.454 00:05:29.454 real 0m2.543s 00:05:29.454 user 0m0.736s 00:05:29.454 sys 0m1.814s 00:05:29.454 10:30:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.454 10:30:56 -- common/autotest_common.sh@10 -- # set +x 00:05:29.454 ************************************ 00:05:29.454 END TEST driver 00:05:29.454 ************************************ 00:05:29.454 10:30:56 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:29.454 10:30:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.454 10:30:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.454 10:30:56 -- common/autotest_common.sh@10 -- # set +x 00:05:29.454 ************************************ 00:05:29.454 START TEST devices 00:05:29.454 ************************************ 00:05:29.454 10:30:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:29.712 * Looking for test storage... 00:05:29.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:29.712 10:30:56 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:29.712 10:30:56 -- setup/devices.sh@192 -- # setup reset 00:05:29.712 10:30:56 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:29.712 10:30:56 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:29.970 10:30:56 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:29.970 10:30:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:29.970 10:30:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:29.970 10:30:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:29.970 10:30:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:29.970 10:30:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:29.970 10:30:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:29.970 10:30:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:29.970 10:30:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:29.970 10:30:56 -- setup/devices.sh@196 -- # blocks=() 00:05:29.970 10:30:56 -- setup/devices.sh@196 -- # declare -a blocks 00:05:29.970 10:30:56 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:29.970 10:30:56 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:29.970 10:30:56 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:29.970 10:30:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:29.970 10:30:56 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:29.970 10:30:56 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:29.970 10:30:56 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:29.970 10:30:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:29.970 10:30:56 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:29.970 10:30:56 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:29.970 10:30:56 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:30.228 No valid GPT data, bailing 00:05:30.228 10:30:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.228 10:30:56 -- scripts/common.sh@393 -- # pt= 00:05:30.228 10:30:56 -- scripts/common.sh@394 -- # return 1 00:05:30.228 10:30:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:30.228 10:30:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:30.228 10:30:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:30.228 10:30:56 -- setup/common.sh@80 -- # echo 5368709120 00:05:30.228 10:30:56 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:30.228 10:30:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:30.228 10:30:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:30.228 10:30:56 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:30.228 10:30:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:30.228 10:30:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:30.228 10:30:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.228 10:30:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.228 10:30:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.228 ************************************ 00:05:30.228 START TEST nvme_mount 00:05:30.228 ************************************ 00:05:30.228 10:30:56 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:30.228 10:30:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:30.228 10:30:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:30.228 10:30:56 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.228 10:30:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:30.228 10:30:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:30.228 10:30:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:30.228 10:30:56 -- setup/common.sh@40 -- # local part_no=1 00:05:30.228 10:30:56 -- setup/common.sh@41 -- # local size=1073741824 00:05:30.228 10:30:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:30.228 10:30:56 -- setup/common.sh@44 -- # parts=() 00:05:30.228 10:30:56 -- setup/common.sh@44 -- # local parts 00:05:30.228 10:30:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:30.228 10:30:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.228 10:30:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.228 10:30:56 -- setup/common.sh@46 -- # (( part++ )) 00:05:30.228 10:30:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.228 10:30:56 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:30.228 10:30:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.228 10:30:56 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:31.163 Creating new GPT entries in memory. 00:05:31.163 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.163 other utilities. 00:05:31.163 10:30:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.163 10:30:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.163 10:30:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:31.163 10:30:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.163 10:30:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:32.538 Creating new GPT entries in memory. 00:05:32.538 The operation has completed successfully. 00:05:32.538 10:30:58 -- setup/common.sh@57 -- # (( part++ )) 00:05:32.538 10:30:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.538 10:30:58 -- setup/common.sh@62 -- # wait 108293 00:05:32.538 10:30:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.538 10:30:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:32.538 10:30:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.538 10:30:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:32.538 10:30:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:32.538 10:30:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.538 10:30:58 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:32.538 10:30:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:32.538 10:30:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:32.538 10:30:58 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:32.538 10:30:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:32.538 10:30:58 -- setup/devices.sh@53 -- # local found=0 00:05:32.538 10:30:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.538 10:30:58 -- setup/devices.sh@56 -- # : 00:05:32.538 10:30:58 -- setup/devices.sh@59 -- # local pci status 00:05:32.538 10:30:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.538 10:30:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:32.538 10:30:58 -- setup/devices.sh@47 -- # setup output config 00:05:32.538 10:30:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.538 10:30:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.538 10:30:59 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.538 10:30:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:32.538 10:30:59 -- setup/devices.sh@63 -- # found=1 00:05:32.538 10:30:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.538 10:30:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.538 10:30:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.796 10:30:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.796 10:30:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.298 10:31:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.298 10:31:00 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:34.298 10:31:00 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.299 10:31:00 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.299 10:31:00 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.299 10:31:00 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:34.299 10:31:00 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.299 10:31:00 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.299 10:31:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.299 10:31:00 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.299 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.299 10:31:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.299 10:31:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.299 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:34.299 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:34.299 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:34.299 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:34.299 10:31:00 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:34.299 10:31:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:34.299 10:31:00 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.299 10:31:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:34.299 10:31:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:34.299 10:31:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.299 10:31:00 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.299 10:31:00 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:34.299 10:31:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:34.299 10:31:00 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.299 10:31:00 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:34.299 10:31:00 -- setup/devices.sh@53 -- # local found=0 00:05:34.299 10:31:00 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:34.299 10:31:00 -- setup/devices.sh@56 -- # : 00:05:34.299 10:31:00 -- setup/devices.sh@59 -- # local pci status 00:05:34.299 10:31:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.299 10:31:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:34.299 10:31:00 -- setup/devices.sh@47 -- # setup output config 00:05:34.299 10:31:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.299 10:31:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.557 10:31:00 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.557 10:31:00 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:34.557 10:31:00 -- setup/devices.sh@63 -- # found=1 00:05:34.557 10:31:00 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:05:34.557 10:31:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.557 10:31:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.557 10:31:01 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.557 10:31:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.930 10:31:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:35.930 10:31:02 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:35.930 10:31:02 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.930 10:31:02 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:35.930 10:31:02 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:35.930 10:31:02 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:35.930 10:31:02 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:35.930 10:31:02 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:35.930 10:31:02 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:35.930 10:31:02 -- setup/devices.sh@50 -- # local mount_point= 00:05:35.930 10:31:02 -- setup/devices.sh@51 -- # local test_file= 00:05:35.930 10:31:02 -- setup/devices.sh@53 -- # local found=0 00:05:35.930 10:31:02 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:35.930 10:31:02 -- setup/devices.sh@59 -- # local pci status 00:05:35.930 10:31:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:35.930 10:31:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:35.930 10:31:02 -- setup/devices.sh@47 -- # setup output config 00:05:35.930 10:31:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.930 10:31:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.496 10:31:02 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.496 10:31:02 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:36.496 10:31:02 -- setup/devices.sh@63 -- # found=1 00:05:36.496 10:31:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.496 10:31:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.496 10:31:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.496 10:31:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:36.496 10:31:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.429 10:31:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.429 10:31:04 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:37.429 10:31:04 -- setup/devices.sh@68 -- # return 0 00:05:37.429 10:31:04 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:37.429 10:31:04 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:37.429 10:31:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:37.429 10:31:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:37.429 10:31:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:37.429 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:37.429 00:05:37.429 real 0m7.334s 00:05:37.429 user 0m0.727s 00:05:37.429 sys 0m4.461s 00:05:37.429 10:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.429 10:31:04 -- 
common/autotest_common.sh@10 -- # set +x 00:05:37.429 ************************************ 00:05:37.429 END TEST nvme_mount 00:05:37.429 ************************************ 00:05:37.429 10:31:04 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:37.429 10:31:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.429 10:31:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.429 10:31:04 -- common/autotest_common.sh@10 -- # set +x 00:05:37.687 ************************************ 00:05:37.687 START TEST dm_mount 00:05:37.687 ************************************ 00:05:37.687 10:31:04 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:37.687 10:31:04 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:37.687 10:31:04 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:37.687 10:31:04 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:37.687 10:31:04 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:37.687 10:31:04 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:37.687 10:31:04 -- setup/common.sh@40 -- # local part_no=2 00:05:37.687 10:31:04 -- setup/common.sh@41 -- # local size=1073741824 00:05:37.687 10:31:04 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:37.687 10:31:04 -- setup/common.sh@44 -- # parts=() 00:05:37.687 10:31:04 -- setup/common.sh@44 -- # local parts 00:05:37.687 10:31:04 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:37.687 10:31:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.687 10:31:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.687 10:31:04 -- setup/common.sh@46 -- # (( part++ )) 00:05:37.687 10:31:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.687 10:31:04 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:37.687 10:31:04 -- setup/common.sh@46 -- # (( part++ )) 00:05:37.687 10:31:04 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:37.687 10:31:04 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:37.687 10:31:04 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:37.687 10:31:04 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:38.622 Creating new GPT entries in memory. 00:05:38.622 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:38.622 other utilities. 00:05:38.622 10:31:05 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:38.622 10:31:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:38.622 10:31:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:38.622 10:31:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:38.622 10:31:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:39.555 Creating new GPT entries in memory. 00:05:39.555 The operation has completed successfully. 00:05:39.555 10:31:06 -- setup/common.sh@57 -- # (( part++ )) 00:05:39.555 10:31:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:39.555 10:31:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:39.555 10:31:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:39.555 10:31:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:40.928 The operation has completed successfully. 
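The dm_mount test begins the same way nvme_mount did: partition_drive wipes the GPT, then carves out equal partitions one at a time, taking flock on the whole disk for each sgdisk call while scripts/sync_dev_uevents.sh waits for the matching partition uevents. A rough stand-in for that sequence, with the uevent helper replaced by a plain udevadm settle so the example stays self-contained:

  # Rough equivalent of the partitioning traced above; the repo synchronises on
  # specific uevents via scripts/sync_dev_uevents.sh rather than udevadm settle.
  partition_disk() {
      local disk=$1 part_no=$2
      local size=$(( 1073741824 / 4096 ))   # sectors per partition, as in the trace
      local part part_start=0 part_end=0

      sgdisk "$disk" --zap-all              # drop any existing GPT/MBR structures

      for (( part = 1; part <= part_no; part++ )); do
          (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
          (( part_end = part_start + size - 1 ))
          # flock keeps anything else from touching the disk while the table changes.
          flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
          udevadm settle                    # wait for the new partition node to appear
      done
  }

  # As in the trace: two partitions (roughly 128 MiB each with 512-byte LBAs)
  # on the NVMe test disk, yielding nvme0n1p1 and nvme0n1p2 for the dm target.
  partition_disk /dev/nvme0n1 2

The two partitions are then stitched into a single device-mapper target (nvme_dm_test) before mkfs and mount, which is what the dmsetup create call just below does.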
00:05:40.928 10:31:07 -- setup/common.sh@57 -- # (( part++ )) 00:05:40.928 10:31:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:40.928 10:31:07 -- setup/common.sh@62 -- # wait 108792 00:05:40.928 10:31:07 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:40.928 10:31:07 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.928 10:31:07 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.928 10:31:07 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:40.928 10:31:07 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:40.928 10:31:07 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.928 10:31:07 -- setup/devices.sh@161 -- # break 00:05:40.928 10:31:07 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.928 10:31:07 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:40.928 10:31:07 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:40.928 10:31:07 -- setup/devices.sh@166 -- # dm=dm-0 00:05:40.928 10:31:07 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:40.928 10:31:07 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:40.928 10:31:07 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.928 10:31:07 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:40.928 10:31:07 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.928 10:31:07 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:40.928 10:31:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:40.928 10:31:07 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.928 10:31:07 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.928 10:31:07 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:40.928 10:31:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:40.928 10:31:07 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:40.928 10:31:07 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:40.928 10:31:07 -- setup/devices.sh@53 -- # local found=0 00:05:40.928 10:31:07 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:40.928 10:31:07 -- setup/devices.sh@56 -- # : 00:05:40.928 10:31:07 -- setup/devices.sh@59 -- # local pci status 00:05:40.928 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.928 10:31:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:40.928 10:31:07 -- setup/devices.sh@47 -- # setup output config 00:05:40.928 10:31:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.928 10:31:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:40.929 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:40.929 10:31:07 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:40.929 10:31:07 -- setup/devices.sh@63 -- # found=1 00:05:40.929 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.929 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:40.929 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.189 10:31:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:41.189 10:31:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.158 10:31:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:42.158 10:31:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:42.158 10:31:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.158 10:31:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:42.158 10:31:08 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:42.158 10:31:08 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:42.158 10:31:08 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:42.158 10:31:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:42.158 10:31:08 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:42.158 10:31:08 -- setup/devices.sh@50 -- # local mount_point= 00:05:42.158 10:31:08 -- setup/devices.sh@51 -- # local test_file= 00:05:42.158 10:31:08 -- setup/devices.sh@53 -- # local found=0 00:05:42.158 10:31:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:42.158 10:31:08 -- setup/devices.sh@59 -- # local pci status 00:05:42.158 10:31:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.158 10:31:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:42.158 10:31:08 -- setup/devices.sh@47 -- # setup output config 00:05:42.158 10:31:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.158 10:31:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.418 10:31:08 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.418 10:31:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:42.418 10:31:08 -- setup/devices.sh@63 -- # found=1 00:05:42.418 10:31:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.418 10:31:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.418 10:31:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:42.418 10:31:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:42.418 10:31:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.328 10:31:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:44.328 10:31:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:44.328 10:31:10 -- setup/devices.sh@68 -- # return 0 00:05:44.328 10:31:10 -- setup/devices.sh@187 -- # cleanup_dm 00:05:44.328 10:31:10 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:44.328 10:31:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:44.328 10:31:10 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:44.328 10:31:10 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.328 10:31:10 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:44.328 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:44.328 10:31:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:44.328 10:31:10 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:44.328 00:05:44.328 real 0m6.482s 00:05:44.328 user 0m0.496s 00:05:44.328 sys 0m2.901s 00:05:44.328 10:31:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.328 ************************************ 00:05:44.328 END TEST dm_mount 00:05:44.328 ************************************ 00:05:44.328 10:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.328 10:31:10 -- setup/devices.sh@1 -- # cleanup 00:05:44.328 10:31:10 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:44.328 10:31:10 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.328 10:31:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.328 10:31:10 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:44.328 10:31:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.328 10:31:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:44.328 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:44.328 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:44.328 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:44.328 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:44.328 10:31:10 -- setup/devices.sh@12 -- # cleanup_dm 00:05:44.328 10:31:10 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:44.328 10:31:10 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:44.328 10:31:10 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:44.328 10:31:10 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:44.328 10:31:10 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:44.328 10:31:10 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:44.328 ************************************ 00:05:44.328 END TEST devices 00:05:44.328 ************************************ 00:05:44.328 00:05:44.328 real 0m14.603s 00:05:44.328 user 0m1.616s 00:05:44.328 sys 0m7.749s 00:05:44.328 10:31:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.328 10:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.328 ************************************ 00:05:44.328 END TEST setup.sh 00:05:44.328 ************************************ 00:05:44.328 00:05:44.328 real 0m30.100s 00:05:44.328 user 0m6.460s 00:05:44.328 sys 0m18.375s 00:05:44.328 10:31:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.328 10:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:44.328 10:31:10 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:44.328 Hugepages 00:05:44.328 node hugesize free / total 00:05:44.328 node0 1048576kB 0 / 0 00:05:44.328 node0 2048kB 2048 / 2048 00:05:44.328 00:05:44.328 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:44.328 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:44.586 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:44.586 10:31:11 -- spdk/autotest.sh@141 -- # uname -s 00:05:44.586 10:31:11 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:44.586 10:31:11 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:05:44.586 10:31:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:44.844 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:45.103 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.036 10:31:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:47.409 10:31:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:47.409 10:31:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:47.409 10:31:13 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:47.409 10:31:13 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:47.409 10:31:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:47.409 10:31:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:47.409 10:31:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:47.409 10:31:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:47.409 10:31:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:47.409 10:31:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:47.409 10:31:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:47.409 10:31:13 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:47.409 Waiting for block devices as requested 00:05:47.667 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:47.667 10:31:14 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:47.667 10:31:14 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:47.667 10:31:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:47.667 10:31:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:47.667 10:31:14 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:47.667 10:31:14 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:47.667 10:31:14 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:47.667 10:31:14 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:47.667 10:31:14 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:47.667 10:31:14 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:47.667 10:31:14 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:47.667 10:31:14 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:47.667 10:31:14 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:47.667 10:31:14 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:47.667 10:31:14 -- common/autotest_common.sh@1542 -- # continue 00:05:47.667 10:31:14 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:47.667 10:31:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.667 10:31:14 -- common/autotest_common.sh@10 -- # set +x 00:05:47.667 10:31:14 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:47.667 10:31:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.667 10:31:14 -- common/autotest_common.sh@10 -- # set +x 00:05:47.667 10:31:14 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:48.234 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:48.234 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:49.608 10:31:16 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:49.608 10:31:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.608 10:31:16 -- common/autotest_common.sh@10 -- # set +x 00:05:49.608 10:31:16 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:49.608 10:31:16 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:49.608 10:31:16 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:49.608 10:31:16 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:49.608 10:31:16 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:49.608 10:31:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:49.608 10:31:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:49.608 10:31:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:49.608 10:31:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:49.608 10:31:16 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:49.608 10:31:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:49.868 10:31:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:49.868 10:31:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:49.868 10:31:16 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:49.868 10:31:16 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:49.868 10:31:16 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:49.868 10:31:16 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:49.868 10:31:16 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:49.868 10:31:16 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:49.868 10:31:16 -- common/autotest_common.sh@1578 -- # return 0 00:05:49.868 10:31:16 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:49.868 10:31:16 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:49.868 10:31:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.868 10:31:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.868 10:31:16 -- common/autotest_common.sh@10 -- # set +x 00:05:49.868 ************************************ 00:05:49.868 START TEST unittest 00:05:49.868 ************************************ 00:05:49.868 10:31:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:49.868 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:49.868 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:49.868 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:49.868 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:49.868 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:49.868 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:49.868 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:49.868 ++ rpc_py=rpc_cmd 00:05:49.868 ++ set -e 00:05:49.868 ++ shopt -s nullglob 00:05:49.868 ++ shopt -s extglob 00:05:49.868 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:49.868 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:49.868 +++ CONFIG_WPDK_DIR= 00:05:49.868 +++ CONFIG_ASAN=y 00:05:49.868 +++ CONFIG_VBDEV_COMPRESS=n 00:05:49.868 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:49.868 +++ CONFIG_USDT=n 00:05:49.868 +++ CONFIG_CUSTOMOCF=n 00:05:49.868 +++ CONFIG_PREFIX=/usr/local 00:05:49.868 +++ CONFIG_RBD=n 00:05:49.868 +++ CONFIG_LIBDIR= 00:05:49.868 +++ CONFIG_IDXD=y 00:05:49.868 +++ CONFIG_NVME_CUSE=y 00:05:49.868 +++ CONFIG_SMA=n 00:05:49.868 +++ CONFIG_VTUNE=n 00:05:49.868 +++ CONFIG_TSAN=n 00:05:49.868 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:49.868 +++ CONFIG_VFIO_USER_DIR= 00:05:49.868 +++ CONFIG_PGO_CAPTURE=n 00:05:49.868 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:49.868 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:49.868 +++ CONFIG_LTO=n 00:05:49.868 +++ CONFIG_ISCSI_INITIATOR=y 00:05:49.868 +++ CONFIG_CET=n 00:05:49.868 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:49.868 +++ CONFIG_OCF_PATH= 00:05:49.868 +++ CONFIG_RDMA_SET_TOS=y 00:05:49.868 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:49.868 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:49.868 +++ CONFIG_UBLK=n 00:05:49.868 +++ CONFIG_ISAL_CRYPTO=y 00:05:49.868 +++ CONFIG_OPENSSL_PATH= 00:05:49.868 +++ CONFIG_OCF=n 00:05:49.868 +++ CONFIG_FUSE=n 00:05:49.868 +++ CONFIG_VTUNE_DIR= 00:05:49.868 +++ CONFIG_FUZZER_LIB= 00:05:49.868 +++ CONFIG_FUZZER=n 00:05:49.868 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:05:49.868 +++ CONFIG_CRYPTO=n 00:05:49.868 +++ CONFIG_PGO_USE=n 00:05:49.868 +++ CONFIG_VHOST=y 00:05:49.868 +++ CONFIG_DAOS=n 00:05:49.868 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:05:49.868 +++ CONFIG_DAOS_DIR= 00:05:49.868 +++ CONFIG_UNIT_TESTS=y 00:05:49.868 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:49.868 +++ CONFIG_VIRTIO=y 00:05:49.868 +++ CONFIG_COVERAGE=y 00:05:49.868 +++ CONFIG_RDMA=y 00:05:49.868 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:49.868 +++ CONFIG_URING_PATH= 00:05:49.868 +++ CONFIG_XNVME=n 00:05:49.868 +++ CONFIG_VFIO_USER=n 00:05:49.868 +++ CONFIG_ARCH=native 00:05:49.868 +++ CONFIG_URING_ZNS=n 00:05:49.868 +++ CONFIG_WERROR=y 00:05:49.868 +++ CONFIG_HAVE_LIBBSD=n 00:05:49.868 +++ CONFIG_UBSAN=y 00:05:49.868 +++ CONFIG_IPSEC_MB_DIR= 00:05:49.868 +++ CONFIG_GOLANG=n 00:05:49.868 +++ CONFIG_ISAL=y 00:05:49.868 +++ CONFIG_IDXD_KERNEL=n 00:05:49.868 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:49.868 +++ CONFIG_RDMA_PROV=verbs 00:05:49.868 +++ CONFIG_APPS=y 00:05:49.868 +++ CONFIG_SHARED=n 00:05:49.868 +++ CONFIG_FC_PATH= 00:05:49.868 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:49.868 +++ CONFIG_FC=n 00:05:49.868 +++ CONFIG_AVAHI=n 00:05:49.868 +++ CONFIG_FIO_PLUGIN=y 00:05:49.868 +++ CONFIG_RAID5F=y 00:05:49.868 +++ CONFIG_EXAMPLES=y 00:05:49.868 +++ CONFIG_TESTS=y 00:05:49.868 +++ CONFIG_CRYPTO_MLX5=n 00:05:49.868 +++ CONFIG_MAX_LCORES= 00:05:49.868 +++ CONFIG_IPSEC_MB=n 00:05:49.868 +++ CONFIG_DEBUG=y 00:05:49.868 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:49.868 +++ CONFIG_CROSS_PREFIX= 00:05:49.868 +++ CONFIG_URING=n 00:05:49.868 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:49.868 +++++ dirname 
/home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:49.868 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:49.868 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:49.868 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:49.868 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:49.868 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:49.868 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:49.868 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:49.868 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:49.868 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:49.868 +++ VHOST_APP=("$_app_dir/vhost") 00:05:49.868 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:49.868 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:49.868 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:49.868 +++ [[ #ifndef SPDK_CONFIG_H 00:05:49.868 #define SPDK_CONFIG_H 00:05:49.868 #define SPDK_CONFIG_APPS 1 00:05:49.868 #define SPDK_CONFIG_ARCH native 00:05:49.868 #define SPDK_CONFIG_ASAN 1 00:05:49.868 #undef SPDK_CONFIG_AVAHI 00:05:49.868 #undef SPDK_CONFIG_CET 00:05:49.868 #define SPDK_CONFIG_COVERAGE 1 00:05:49.868 #define SPDK_CONFIG_CROSS_PREFIX 00:05:49.868 #undef SPDK_CONFIG_CRYPTO 00:05:49.868 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:49.868 #undef SPDK_CONFIG_CUSTOMOCF 00:05:49.868 #undef SPDK_CONFIG_DAOS 00:05:49.868 #define SPDK_CONFIG_DAOS_DIR 00:05:49.868 #define SPDK_CONFIG_DEBUG 1 00:05:49.868 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:49.868 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:05:49.868 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:05:49.868 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:05:49.868 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:49.868 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:49.868 #define SPDK_CONFIG_EXAMPLES 1 00:05:49.868 #undef SPDK_CONFIG_FC 00:05:49.868 #define SPDK_CONFIG_FC_PATH 00:05:49.868 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:49.868 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:49.868 #undef SPDK_CONFIG_FUSE 00:05:49.868 #undef SPDK_CONFIG_FUZZER 00:05:49.868 #define SPDK_CONFIG_FUZZER_LIB 00:05:49.868 #undef SPDK_CONFIG_GOLANG 00:05:49.868 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:49.868 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:49.868 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:49.868 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:49.868 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:49.868 #define SPDK_CONFIG_IDXD 1 00:05:49.868 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:49.868 #undef SPDK_CONFIG_IPSEC_MB 00:05:49.868 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:49.868 #define SPDK_CONFIG_ISAL 1 00:05:49.868 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:49.869 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:49.869 #define SPDK_CONFIG_LIBDIR 00:05:49.869 #undef SPDK_CONFIG_LTO 00:05:49.869 #define SPDK_CONFIG_MAX_LCORES 00:05:49.869 #define SPDK_CONFIG_NVME_CUSE 1 00:05:49.869 #undef SPDK_CONFIG_OCF 00:05:49.869 #define SPDK_CONFIG_OCF_PATH 00:05:49.869 #define SPDK_CONFIG_OPENSSL_PATH 00:05:49.869 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:49.869 #undef SPDK_CONFIG_PGO_USE 00:05:49.869 #define SPDK_CONFIG_PREFIX /usr/local 00:05:49.869 #define SPDK_CONFIG_RAID5F 1 00:05:49.869 #undef SPDK_CONFIG_RBD 00:05:49.869 #define SPDK_CONFIG_RDMA 1 00:05:49.869 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:49.869 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:49.869 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:49.869 
#define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:49.869 #undef SPDK_CONFIG_SHARED 00:05:49.869 #undef SPDK_CONFIG_SMA 00:05:49.869 #define SPDK_CONFIG_TESTS 1 00:05:49.869 #undef SPDK_CONFIG_TSAN 00:05:49.869 #undef SPDK_CONFIG_UBLK 00:05:49.869 #define SPDK_CONFIG_UBSAN 1 00:05:49.869 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:49.869 #undef SPDK_CONFIG_URING 00:05:49.869 #define SPDK_CONFIG_URING_PATH 00:05:49.869 #undef SPDK_CONFIG_URING_ZNS 00:05:49.869 #undef SPDK_CONFIG_USDT 00:05:49.869 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:49.869 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:49.869 #undef SPDK_CONFIG_VFIO_USER 00:05:49.869 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:49.869 #define SPDK_CONFIG_VHOST 1 00:05:49.869 #define SPDK_CONFIG_VIRTIO 1 00:05:49.869 #undef SPDK_CONFIG_VTUNE 00:05:49.869 #define SPDK_CONFIG_VTUNE_DIR 00:05:49.869 #define SPDK_CONFIG_WERROR 1 00:05:49.869 #define SPDK_CONFIG_WPDK_DIR 00:05:49.869 #undef SPDK_CONFIG_XNVME 00:05:49.869 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:49.869 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:49.869 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:49.869 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:49.869 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.869 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.869 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:49.869 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:49.869 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:49.869 ++++ export PATH 00:05:49.869 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:49.869 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:49.869 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:49.869 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:49.869 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:49.869 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:49.869 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:49.869 +++ TEST_TAG=N/A 00:05:49.869 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:49.869 ++ : 1 00:05:49.869 ++ export RUN_NIGHTLY 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_RUN_VALGRIND 00:05:49.869 ++ : 1 00:05:49.869 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:49.869 ++ : 1 00:05:49.869 ++ export 
SPDK_TEST_UNITTEST 00:05:49.869 ++ : 00:05:49.869 ++ export SPDK_TEST_AUTOBUILD 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_RELEASE_BUILD 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_ISAL 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_ISCSI 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:49.869 ++ : 1 00:05:49.869 ++ export SPDK_TEST_NVME 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_NVME_PMR 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_NVME_BP 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_NVME_CLI 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_NVME_CUSE 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_NVME_FDP 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_NVMF 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_VFIOUSER 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_FUZZER 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_FUZZER_SHORT 00:05:49.869 ++ : rdma 00:05:49.869 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_RBD 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_VHOST 00:05:49.869 ++ : 1 00:05:49.869 ++ export SPDK_TEST_BLOCKDEV 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_IOAT 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_BLOBFS 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_VHOST_INIT 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_LVOL 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:49.869 ++ : 1 00:05:49.869 ++ export SPDK_RUN_ASAN 00:05:49.869 ++ : 1 00:05:49.869 ++ export SPDK_RUN_UBSAN 00:05:49.869 ++ : /home/vagrant/spdk_repo/dpdk/build 00:05:49.869 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_RUN_NON_ROOT 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_CRYPTO 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_FTL 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_OCF 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_VMD 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_OPAL 00:05:49.869 ++ : v22.11.4 00:05:49.869 ++ export SPDK_TEST_NATIVE_DPDK 00:05:49.869 ++ : true 00:05:49.869 ++ export SPDK_AUTOTEST_X 00:05:49.869 ++ : 1 00:05:49.869 ++ export SPDK_TEST_RAID5 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_URING 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_USDT 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_USE_IGB_UIO 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_SCHEDULER 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_SCANBUILD 00:05:49.869 ++ : 00:05:49.869 ++ export SPDK_TEST_NVMF_NICS 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_SMA 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_DAOS 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_XNVME 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_ACCEL_DSA 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_ACCEL_IAA 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_ACCEL_IOAT 00:05:49.869 ++ : 00:05:49.869 ++ export SPDK_TEST_FUZZER_TARGET 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_TEST_NVMF_MDNS 00:05:49.869 ++ : 0 00:05:49.869 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:49.869 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:49.869 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:49.869 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:49.869 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:49.869 
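The long run of '++ : 0' / '++ export SPDK_TEST_...' pairs above is bash's parameter-default idiom under xtrace: each flag receives a default only if the environment has not already set it, then is exported for every child script. A minimal sketch of that idiom, assuming the usual ': ${VAR:=default}' form; the variable names and values are taken from the trace, the surrounding file is not reproduced here:

    # ': ${VAR:=default}' assigns only when VAR is unset or empty; under 'set -x'
    # it is traced as '++ : <resulting value>', exactly as in the log above.
    : "${RUN_NIGHTLY:=1}";                  export RUN_NIGHTLY
    : "${SPDK_RUN_FUNCTIONAL_TEST:=1}";     export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_UNITTEST:=1}";           export SPDK_TEST_UNITTEST
    : "${SPDK_TEST_NVME:=1}";               export SPDK_TEST_NVME
    : "${SPDK_TEST_VHOST:=0}";              export SPDK_TEST_VHOST
    : "${SPDK_TEST_NATIVE_DPDK:=v22.11.4}"; export SPDK_TEST_NATIVE_DPDK
    # values already present in the environment (for example RUN_NIGHTLY=1 set
    # by the CI job) therefore take precedence over these defaults
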
++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:49.869 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:49.869 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:49.869 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:49.869 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:49.869 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:49.869 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:49.869 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:49.869 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:49.869 ++ PYTHONDONTWRITEBYTECODE=1 00:05:49.869 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:49.869 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:49.869 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:49.869 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:49.869 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:49.869 ++ rm -rf /var/tmp/asan_suppression_file 00:05:49.869 ++ cat 00:05:49.869 ++ echo leak:libfuse3.so 00:05:49.869 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:49.869 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:49.870 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:49.870 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:49.870 ++ '[' -z /var/spdk/dependencies ']' 00:05:49.870 ++ export DEPENDENCY_DIR 00:05:49.870 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:49.870 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:49.870 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:49.870 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:49.870 ++ export QEMU_BIN= 00:05:49.870 ++ QEMU_BIN= 00:05:49.870 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:49.870 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:49.870 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:49.870 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:49.870 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:49.870 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:49.870 ++ '[' 0 -eq 0 ']' 00:05:49.870 ++ export valgrind= 00:05:49.870 ++ valgrind= 00:05:49.870 +++ uname -s 00:05:49.870 ++ '[' Linux = Linux ']' 00:05:49.870 ++ HUGEMEM=4096 
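Before any test binary runs, the harness pins down the sanitizer behaviour and leak suppressions traced above. A condensed sketch of that environment setup; the option strings are verbatim from the trace, while the wrapper function name is invented for illustration:

    # hypothetical wrapper around the exports shown in the log above
    setup_sanitizer_env() {
        export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
        export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
        local supp=/var/tmp/asan_suppression_file
        rm -rf "$supp"
        echo leak:libfuse3.so > "$supp"    # known libfuse3 leak, ignored by LeakSanitizer
        export LSAN_OPTIONS=suppressions=$supp
        export PYTHONDONTWRITEBYTECODE=1   # keep the checkout free of .pyc files
    }
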
00:05:49.870 ++ export CLEAR_HUGE=yes 00:05:49.870 ++ CLEAR_HUGE=yes 00:05:49.870 ++ [[ 0 -eq 1 ]] 00:05:49.870 ++ [[ 0 -eq 1 ]] 00:05:49.870 ++ MAKE=make 00:05:49.870 +++ nproc 00:05:49.870 ++ MAKEFLAGS=-j10 00:05:49.870 ++ export HUGEMEM=4096 00:05:49.870 ++ HUGEMEM=4096 00:05:49.870 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:49.870 ++ NO_HUGE=() 00:05:49.870 ++ TEST_MODE= 00:05:49.870 ++ [[ -z '' ]] 00:05:49.870 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:49.870 ++ exec 00:05:49.870 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:49.870 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:49.870 ++ set_test_storage 2147483648 00:05:49.870 ++ [[ -v testdir ]] 00:05:49.870 ++ local requested_size=2147483648 00:05:49.870 ++ local mount target_dir 00:05:49.870 ++ local -A mounts fss sizes avails uses 00:05:49.870 ++ local source fs size avail mount use 00:05:49.870 ++ local storage_fallback storage_candidates 00:05:49.870 +++ mktemp -udt spdk.XXXXXX 00:05:49.870 ++ storage_fallback=/tmp/spdk.B7h2KG 00:05:49.870 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:49.870 ++ [[ -n '' ]] 00:05:49.870 ++ [[ -n '' ]] 00:05:49.870 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.B7h2KG/tests/unit /tmp/spdk.B7h2KG 00:05:49.870 ++ requested_size=2214592512 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 +++ df -T 00:05:49.870 +++ grep -v Filesystem 00:05:49.870 ++ mounts["$mount"]=tmpfs 00:05:49.870 ++ fss["$mount"]=tmpfs 00:05:49.870 ++ avails["$mount"]=1252601856 00:05:49.870 ++ sizes["$mount"]=1253683200 00:05:49.870 ++ uses["$mount"]=1081344 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 ++ mounts["$mount"]=/dev/vda1 00:05:49.870 ++ fss["$mount"]=ext4 00:05:49.870 ++ avails["$mount"]=9654243328 00:05:49.870 ++ sizes["$mount"]=20616794112 00:05:49.870 ++ uses["$mount"]=10945773568 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 ++ mounts["$mount"]=tmpfs 00:05:49.870 ++ fss["$mount"]=tmpfs 00:05:49.870 ++ avails["$mount"]=6268403712 00:05:49.870 ++ sizes["$mount"]=6268403712 00:05:49.870 ++ uses["$mount"]=0 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 ++ mounts["$mount"]=tmpfs 00:05:49.870 ++ fss["$mount"]=tmpfs 00:05:49.870 ++ avails["$mount"]=5242880 00:05:49.870 ++ sizes["$mount"]=5242880 00:05:49.870 ++ uses["$mount"]=0 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 ++ mounts["$mount"]=/dev/vda15 00:05:49.870 ++ fss["$mount"]=vfat 00:05:49.870 ++ avails["$mount"]=103061504 00:05:49.870 ++ sizes["$mount"]=109395968 00:05:49.870 ++ uses["$mount"]=6334464 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 ++ mounts["$mount"]=tmpfs 00:05:49.870 ++ fss["$mount"]=tmpfs 00:05:49.870 ++ avails["$mount"]=1253675008 00:05:49.870 ++ sizes["$mount"]=1253679104 00:05:49.870 ++ uses["$mount"]=4096 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:05:49.870 ++ fss["$mount"]=fuse.sshfs 00:05:49.870 ++ avails["$mount"]=92410748928 00:05:49.870 ++ sizes["$mount"]=105088212992 00:05:49.870 ++ uses["$mount"]=7292030976 00:05:49.870 ++ read -r source fs size use avail _ mount 00:05:49.870 ++ printf '* Looking for 
test storage...\n' 00:05:49.870 * Looking for test storage... 00:05:49.870 ++ local target_space new_size 00:05:49.870 ++ for target_dir in "${storage_candidates[@]}" 00:05:49.870 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:49.870 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:49.870 ++ mount=/ 00:05:49.870 ++ target_space=9654243328 00:05:49.870 ++ (( target_space == 0 || target_space < requested_size )) 00:05:49.870 ++ (( target_space >= requested_size )) 00:05:49.870 ++ [[ ext4 == tmpfs ]] 00:05:49.870 ++ [[ ext4 == ramfs ]] 00:05:49.870 ++ [[ / == / ]] 00:05:49.870 ++ new_size=13160366080 00:05:49.870 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:49.870 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:49.870 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:49.870 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:49.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:49.870 ++ return 0 00:05:49.870 ++ set -o errtrace 00:05:49.870 ++ shopt -s extdebug 00:05:49.870 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:49.870 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:49.870 10:31:16 -- common/autotest_common.sh@1672 -- # true 00:05:49.870 10:31:16 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:49.870 10:31:16 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:49.870 10:31:16 -- common/autotest_common.sh@29 -- # exec 00:05:49.870 10:31:16 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:49.870 10:31:16 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:49.870 10:31:16 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:49.870 10:31:16 -- common/autotest_common.sh@18 -- # set -x 00:05:49.870 10:31:16 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:49.870 10:31:16 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:49.870 10:31:16 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:49.870 10:31:16 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:49.870 10:31:16 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:49.870 10:31:16 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:49.870 10:31:16 -- unit/unittest.sh@179 -- # hash lcov 00:05:49.870 10:31:16 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:49.870 10:31:16 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:49.870 10:31:16 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:49.870 10:31:16 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:49.870 10:31:16 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:49.870 10:31:16 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:49.870 10:31:16 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:49.870 10:31:16 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:49.870 --rc lcov_branch_coverage=1 00:05:49.870 --rc lcov_function_coverage=1 00:05:49.870 --rc genhtml_branch_coverage=1 00:05:49.870 --rc genhtml_function_coverage=1 00:05:49.870 --rc genhtml_legend=1 00:05:49.870 --rc geninfo_all_blocks=1 00:05:49.870 ' 00:05:49.870 10:31:16 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:49.870 --rc lcov_branch_coverage=1 00:05:49.870 --rc lcov_function_coverage=1 00:05:49.870 --rc genhtml_branch_coverage=1 00:05:49.870 --rc 
genhtml_function_coverage=1 00:05:49.870 --rc genhtml_legend=1 00:05:49.870 --rc geninfo_all_blocks=1 00:05:49.870 ' 00:05:49.870 10:31:16 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:49.870 --rc lcov_branch_coverage=1 00:05:49.870 --rc lcov_function_coverage=1 00:05:49.870 --rc genhtml_branch_coverage=1 00:05:49.870 --rc genhtml_function_coverage=1 00:05:49.870 --rc genhtml_legend=1 00:05:49.870 --rc geninfo_all_blocks=1 00:05:49.870 --no-external' 00:05:49.870 10:31:16 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:49.870 --rc lcov_branch_coverage=1 00:05:49.870 --rc lcov_function_coverage=1 00:05:49.870 --rc genhtml_branch_coverage=1 00:05:49.870 --rc genhtml_function_coverage=1 00:05:49.870 --rc genhtml_legend=1 00:05:49.870 --rc geninfo_all_blocks=1 00:05:49.870 --no-external' 00:05:49.870 10:31:16 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:06:08.008 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:06:08.008 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:06:08.008 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:06:08.008 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:06:08.008 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:06:08.008 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:40.073 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:40.073 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:40.073 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions 
found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:40.074 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:40.074 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:40.074 10:32:05 -- unit/unittest.sh@206 -- # uname -m 00:06:40.074 10:32:05 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:06:40.074 10:32:05 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:40.074 10:32:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.074 10:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.074 10:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.074 ************************************ 00:06:40.074 START TEST unittest_pci_event 00:06:40.074 ************************************ 00:06:40.074 10:32:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:40.074 00:06:40.074 00:06:40.074 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.074 http://cunit.sourceforge.net/ 00:06:40.074 00:06:40.074 
00:06:40.074 Suite: pci_event 00:06:40.074 Test: test_pci_parse_event ...[2024-07-24 10:32:05.620211] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:40.074 passed 00:06:40.074 00:06:40.074 [2024-07-24 10:32:05.621334] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:40.074 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.074 suites 1 1 n/a 0 0 00:06:40.074 tests 1 1 1 0 0 00:06:40.074 asserts 15 15 15 0 n/a 00:06:40.074 00:06:40.074 Elapsed time = 0.001 seconds 00:06:40.075 00:06:40.075 real 0m0.045s 00:06:40.075 user 0m0.013s 00:06:40.075 sys 0m0.024s 00:06:40.075 10:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.075 10:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.075 ************************************ 00:06:40.075 END TEST unittest_pci_event 00:06:40.075 ************************************ 00:06:40.075 10:32:05 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:40.075 10:32:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.075 10:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.075 10:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.075 ************************************ 00:06:40.075 START TEST unittest_include 00:06:40.075 ************************************ 00:06:40.075 10:32:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:40.075 00:06:40.075 00:06:40.075 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.075 http://cunit.sourceforge.net/ 00:06:40.075 00:06:40.075 00:06:40.075 Suite: histogram 00:06:40.075 Test: histogram_test ...passed 00:06:40.075 Test: histogram_merge ...passed 00:06:40.075 00:06:40.075 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.075 suites 1 1 n/a 0 0 00:06:40.075 tests 2 2 2 0 0 00:06:40.075 asserts 50 50 50 0 n/a 00:06:40.075 00:06:40.075 Elapsed time = 0.006 seconds 00:06:40.075 00:06:40.075 real 0m0.030s 00:06:40.075 user 0m0.012s 00:06:40.075 sys 0m0.019s 00:06:40.075 10:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.075 10:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.075 ************************************ 00:06:40.075 END TEST unittest_include 00:06:40.075 ************************************ 00:06:40.075 10:32:05 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:06:40.075 10:32:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.075 10:32:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.075 10:32:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.075 ************************************ 00:06:40.075 START TEST unittest_bdev 00:06:40.075 ************************************ 00:06:40.075 10:32:05 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:06:40.075 10:32:05 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:40.075 00:06:40.075 00:06:40.075 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.075 http://cunit.sourceforge.net/ 00:06:40.075 00:06:40.075 00:06:40.075 Suite: bdev 00:06:40.075 Test: bytes_to_blocks_test ...passed 00:06:40.075 Test: num_blocks_test ...passed 00:06:40.075 Test: io_valid_test ...passed 00:06:40.075 
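Every START TEST / END TEST banner in this log is printed by the shared run_test wrapper, which takes a test name followed by the command to execute (the '[ 2 -le 1 ]' check in the trace is its minimum-argument guard). A hedged approximation of that wrapper, inferred from the visible behaviour rather than copied from autotest_common.sh; the real helper additionally records per-test timing and toggles xtrace:

    run_test() {
        [ "$#" -le 1 ] && return 1          # needs a test name plus a command
        local test_name=$1; shift
        printf '%s\n' '************************************'
        printf 'START TEST %s\n' "$test_name"
        printf '%s\n' '************************************'
        "$@"                                # run the test script or CUnit binary
        local rc=$?
        printf '%s\n' '************************************'
        printf 'END TEST %s\n' "$test_name"
        printf '%s\n' '************************************'
        return $rc
    }

    # invoked as in the log, e.g.:
    # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
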
Test: open_write_test ...[2024-07-24 10:32:05.872359] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:40.075 [2024-07-24 10:32:05.873213] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:40.075 [2024-07-24 10:32:05.873503] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:40.075 passed 00:06:40.075 Test: claim_test ...passed 00:06:40.075 Test: alias_add_del_test ...[2024-07-24 10:32:05.957875] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:40.075 [2024-07-24 10:32:05.958268] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:40.075 [2024-07-24 10:32:05.958453] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:40.075 passed 00:06:40.075 Test: get_device_stat_test ...passed 00:06:40.075 Test: bdev_io_types_test ...passed 00:06:40.075 Test: bdev_io_wait_test ...passed 00:06:40.075 Test: bdev_io_spans_split_test ...passed 00:06:40.075 Test: bdev_io_boundary_split_test ...passed 00:06:40.075 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-24 10:32:06.131750] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:40.075 passed 00:06:40.075 Test: bdev_io_mix_split_test ...passed 00:06:40.075 Test: bdev_io_split_with_io_wait ...passed 00:06:40.075 Test: bdev_io_write_unit_split_test ...[2024-07-24 10:32:06.265823] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:40.075 [2024-07-24 10:32:06.266226] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:40.075 [2024-07-24 10:32:06.266404] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:40.075 [2024-07-24 10:32:06.266565] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:40.075 passed 00:06:40.075 Test: bdev_io_alignment_with_boundary ...passed 00:06:40.075 Test: bdev_io_alignment ...passed 00:06:40.075 Test: bdev_histograms ...passed 00:06:40.075 Test: bdev_write_zeroes ...passed 00:06:40.075 Test: bdev_compare_and_write ...passed 00:06:40.075 Test: bdev_compare ...passed 00:06:40.075 Test: bdev_compare_emulated ...passed 00:06:40.075 Test: bdev_zcopy_write ...passed 00:06:40.334 Test: bdev_zcopy_read ...passed 00:06:40.334 Test: bdev_open_while_hotremove ...passed 00:06:40.334 Test: bdev_close_while_hotremove ...passed 00:06:40.334 Test: bdev_open_ext_test ...[2024-07-24 10:32:06.755809] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:40.334 passed 00:06:40.334 Test: bdev_open_ext_unregister ...[2024-07-24 10:32:06.756275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:40.334 passed 00:06:40.334 Test: bdev_set_io_timeout ...passed 00:06:40.334 Test: bdev_set_qd_sampling ...passed 00:06:40.334 Test: lba_range_overlap 
...passed 00:06:40.334 Test: lock_lba_range_check_ranges ...passed 00:06:40.334 Test: lock_lba_range_with_io_outstanding ...passed 00:06:40.334 Test: lock_lba_range_overlapped ...passed 00:06:40.334 Test: bdev_quiesce ...[2024-07-24 10:32:06.995060] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:06:40.592 passed 00:06:40.592 Test: bdev_io_abort ...passed 00:06:40.592 Test: bdev_unmap ...passed 00:06:40.592 Test: bdev_write_zeroes_split_test ...passed 00:06:40.592 Test: bdev_set_options_test ...[2024-07-24 10:32:07.147328] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:40.592 passed 00:06:40.592 Test: bdev_get_memory_domains ...passed 00:06:40.592 Test: bdev_io_ext ...passed 00:06:40.592 Test: bdev_io_ext_no_opts ...passed 00:06:40.850 Test: bdev_io_ext_invalid_opts ...passed 00:06:40.850 Test: bdev_io_ext_split ...passed 00:06:40.850 Test: bdev_io_ext_bounce_buffer ...passed 00:06:40.850 Test: bdev_register_uuid_alias ...[2024-07-24 10:32:07.361962] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 9cd2dc0c-0e5c-494e-9118-8df0d9627dcf already exists 00:06:40.850 [2024-07-24 10:32:07.362263] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:9cd2dc0c-0e5c-494e-9118-8df0d9627dcf alias for bdev bdev0 00:06:40.850 passed 00:06:40.850 Test: bdev_unregister_by_name ...[2024-07-24 10:32:07.382938] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:40.850 passed 00:06:40.850 Test: for_each_bdev_test ...[2024-07-24 10:32:07.383196] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:06:40.850 passed 00:06:40.850 Test: bdev_seek_test ...passed 00:06:40.850 Test: bdev_copy ...passed 00:06:40.850 Test: bdev_copy_split_test ...passed 00:06:40.850 Test: examine_locks ...passed 00:06:40.850 Test: claim_v2_rwo ...[2024-07-24 10:32:07.511155] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:40.850 [2024-07-24 10:32:07.511565] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:40.850 [2024-07-24 10:32:07.511769] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:40.850 [2024-07-24 10:32:07.511968] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.512103] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.512263] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:40.851 passed 00:06:40.851 Test: claim_v2_rom ...[2024-07-24 10:32:07.512598] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.512776] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.512920] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.513048] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.513227] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:40.851 [2024-07-24 10:32:07.513408] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:40.851 passed 00:06:40.851 Test: claim_v2_rwm ...[2024-07-24 10:32:07.513663] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:40.851 [2024-07-24 10:32:07.513844] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.513984] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.514126] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.514261] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.514408] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.514561] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:40.851 passed 00:06:40.851 Test: claim_v2_existing_writer ...[2024-07-24 10:32:07.514850] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:40.851 [2024-07-24 10:32:07.514978] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:40.851 passed 00:06:40.851 Test: claim_v2_existing_v1 ...[2024-07-24 10:32:07.515205] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.515381] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.515519] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:40.851 passed 00:06:40.851 Test: claim_v1_existing_v2 ...[2024-07-24 10:32:07.515784] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.515969] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:40.851 [2024-07-24 10:32:07.516113] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:40.851 passed 00:06:40.851 Test: examine_claimed ...[2024-07-24 10:32:07.516501] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:40.851 passed 00:06:40.851 00:06:40.851 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.851 suites 1 1 n/a 0 0 00:06:40.851 tests 59 59 59 0 0 00:06:40.851 asserts 4599 4599 4599 0 n/a 00:06:40.851 00:06:40.851 Elapsed time = 1.704 seconds 00:06:41.116 10:32:07 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:41.116 00:06:41.116 00:06:41.116 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.116 http://cunit.sourceforge.net/ 00:06:41.116 00:06:41.116 00:06:41.116 Suite: nvme 00:06:41.116 Test: test_create_ctrlr ...passed 00:06:41.116 Test: test_reset_ctrlr ...[2024-07-24 10:32:07.565371] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:41.116 passed 00:06:41.116 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:41.116 Test: test_failover_ctrlr ...passed 00:06:41.116 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-24 10:32:07.568205] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.116 [2024-07-24 10:32:07.568467] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.116 [2024-07-24 10:32:07.568731] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.116 passed 00:06:41.116 Test: test_pending_reset ...[2024-07-24 10:32:07.570356] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.116 [2024-07-24 10:32:07.570642] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.116 passed 00:06:41.116 Test: test_attach_ctrlr ...[2024-07-24 10:32:07.571919] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:41.116 passed 00:06:41.116 Test: test_aer_cb ...passed 00:06:41.116 Test: test_submit_nvme_cmd ...passed 00:06:41.116 Test: test_add_remove_trid ...passed 00:06:41.116 Test: test_abort ...[2024-07-24 10:32:07.575792] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:41.116 passed 00:06:41.116 Test: test_get_io_qpair ...passed 00:06:41.116 Test: test_bdev_unregister ...passed 00:06:41.116 Test: test_compare_ns ...passed 00:06:41.116 Test: test_init_ana_log_page ...passed 00:06:41.116 Test: test_get_memory_domains ...passed 00:06:41.116 Test: test_reconnect_qpair ...[2024-07-24 10:32:07.578838] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.116 passed 00:06:41.116 Test: test_create_bdev_ctrlr ...[2024-07-24 10:32:07.579487] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:41.116 passed 00:06:41.117 Test: test_add_multi_ns_to_bdev ...[2024-07-24 10:32:07.581072] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:41.117 passed 00:06:41.117 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:41.117 Test: test_admin_path ...passed 00:06:41.117 Test: test_reset_bdev_ctrlr ...passed 00:06:41.117 Test: test_find_io_path ...passed 00:06:41.117 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:41.117 Test: test_retry_io_for_io_path_error ...passed 00:06:41.117 Test: test_retry_io_count ...passed 00:06:41.117 Test: test_concurrent_read_ana_log_page ...passed 00:06:41.117 Test: test_retry_io_for_ana_error ...passed 00:06:41.117 Test: test_check_io_error_resiliency_params ...[2024-07-24 10:32:07.589472] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:06:41.117 [2024-07-24 10:32:07.589566] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:41.117 [2024-07-24 10:32:07.589597] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:41.117 [2024-07-24 10:32:07.589636] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:41.117 [2024-07-24 10:32:07.589677] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:41.117 [2024-07-24 10:32:07.589741] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:41.117 [2024-07-24 10:32:07.589787] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:41.117 [2024-07-24 10:32:07.589848] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:41.117 [2024-07-24 10:32:07.589904] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:41.117 passed 00:06:41.117 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:41.117 Test: test_reconnect_ctrlr ...[2024-07-24 10:32:07.590826] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.591037] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.591426] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.591713] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.591907] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 passed 00:06:41.117 Test: test_retry_failover_ctrlr ...[2024-07-24 10:32:07.592427] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 passed 00:06:41.117 Test: test_fail_path ...[2024-07-24 10:32:07.593138] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.593313] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:41.117 [2024-07-24 10:32:07.593454] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.593586] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.593796] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 passed 00:06:41.117 Test: test_nvme_ns_cmp ...passed 00:06:41.117 Test: test_ana_transition ...passed 00:06:41.117 Test: test_set_preferred_path ...passed 00:06:41.117 Test: test_find_next_io_path ...passed 00:06:41.117 Test: test_find_io_path_min_qd ...passed 00:06:41.117 Test: test_disable_auto_failback ...[2024-07-24 10:32:07.595872] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 passed 00:06:41.117 Test: test_set_multipath_policy ...passed 00:06:41.117 Test: test_uuid_generation ...passed 00:06:41.117 Test: test_retry_io_to_same_path ...passed 00:06:41.117 Test: test_race_between_reset_and_disconnected ...passed 00:06:41.117 Test: test_ctrlr_op_rpc ...passed 00:06:41.117 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:41.117 Test: test_disable_enable_ctrlr ...[2024-07-24 10:32:07.600051] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 [2024-07-24 10:32:07.600249] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:41.117 passed 00:06:41.117 Test: test_delete_ctrlr_done ...passed 00:06:41.117 Test: test_ns_remove_during_reset ...passed 00:06:41.117 00:06:41.117 Run Summary: Type Total Ran Passed Failed Inactive 00:06:41.117 suites 1 1 n/a 0 0 00:06:41.117 tests 48 48 48 0 0 00:06:41.117 asserts 3553 3553 3553 0 n/a 00:06:41.117 00:06:41.117 Elapsed time = 0.037 seconds 00:06:41.117 10:32:07 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:41.117 Test Options 00:06:41.117 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:41.117 00:06:41.117 00:06:41.117 CUnit - A unit testing framework for C - Version 2.1-3 00:06:41.117 http://cunit.sourceforge.net/ 00:06:41.117 00:06:41.117 00:06:41.117 Suite: raid 00:06:41.117 Test: test_create_raid ...passed 00:06:41.117 Test: test_create_raid_superblock ...passed 00:06:41.117 Test: test_delete_raid ...passed 00:06:41.117 Test: test_create_raid_invalid_args ...[2024-07-24 10:32:07.649737] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:41.117 [2024-07-24 10:32:07.650259] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:41.117 [2024-07-24 10:32:07.650819] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:41.117 [2024-07-24 10:32:07.651126] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:41.117 [2024-07-24 10:32:07.652110] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:41.117 passed 00:06:41.117 Test: test_delete_raid_invalid_args ...passed 00:06:41.117 Test: test_io_channel ...passed 00:06:41.117 Test: test_reset_io ...passed 00:06:41.117 Test: test_write_io ...passed 00:06:41.117 Test: test_read_io ...passed 00:06:42.500 Test: test_unmap_io ...passed 00:06:42.500 Test: test_io_failure ...[2024-07-24 10:32:08.804486] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:06:42.500 passed 00:06:42.500 Test: test_multi_raid_no_io ...passed 00:06:42.500 Test: test_multi_raid_with_io ...passed 00:06:42.500 Test: test_io_type_supported ...passed 00:06:42.500 Test: test_raid_json_dump_info ...passed 00:06:42.500 Test: test_context_size ...passed 00:06:42.500 Test: test_raid_level_conversions ...passed 00:06:42.500 Test: test_raid_process ...passed 00:06:42.500 Test: test_raid_io_split ...passed 00:06:42.500 00:06:42.500 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.500 suites 1 1 n/a 0 0 00:06:42.500 tests 19 19 19 0 0 00:06:42.500 asserts 177879 177879 177879 0 n/a 00:06:42.500 00:06:42.500 Elapsed time = 1.164 seconds 00:06:42.500 10:32:08 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:42.500 00:06:42.500 00:06:42.500 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.500 http://cunit.sourceforge.net/ 00:06:42.500 00:06:42.500 00:06:42.500 Suite: raid_sb 00:06:42.500 Test: test_raid_bdev_write_superblock ...passed 00:06:42.500 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:42.500 Test: test_raid_bdev_parse_superblock ...[2024-07-24 10:32:08.860456] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:42.500 passed 00:06:42.500 00:06:42.500 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.500 suites 1 1 n/a 0 0 00:06:42.500 tests 3 3 3 0 0 00:06:42.500 asserts 32 32 32 0 n/a 00:06:42.500 00:06:42.500 Elapsed time = 0.002 seconds 00:06:42.500 10:32:08 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:42.500 00:06:42.500 00:06:42.500 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.500 http://cunit.sourceforge.net/ 00:06:42.500 00:06:42.500 00:06:42.500 Suite: concat 00:06:42.500 Test: test_concat_start ...passed 00:06:42.500 Test: test_concat_rw ...passed 00:06:42.500 Test: test_concat_null_payload ...passed 00:06:42.500 00:06:42.500 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.500 suites 1 1 n/a 0 0 00:06:42.500 tests 3 3 3 0 0 00:06:42.500 asserts 8097 8097 8097 0 n/a 00:06:42.500 00:06:42.500 Elapsed time = 0.006 seconds 00:06:42.500 10:32:08 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:42.500 00:06:42.500 00:06:42.500 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.500 http://cunit.sourceforge.net/ 00:06:42.500 00:06:42.500 00:06:42.500 Suite: raid1 00:06:42.500 Test: test_raid1_start ...passed 00:06:42.500 Test: test_raid1_read_balancing ...passed 00:06:42.500 00:06:42.500 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.500 suites 1 1 n/a 0 0 00:06:42.500 tests 2 2 2 0 0 00:06:42.500 asserts 2856 2856 2856 0 
n/a 00:06:42.500 00:06:42.500 Elapsed time = 0.003 seconds 00:06:42.500 10:32:08 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:42.500 00:06:42.501 00:06:42.501 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.501 http://cunit.sourceforge.net/ 00:06:42.501 00:06:42.501 00:06:42.501 Suite: zone 00:06:42.501 Test: test_zone_get_operation ...passed 00:06:42.501 Test: test_bdev_zone_get_info ...passed 00:06:42.501 Test: test_bdev_zone_management ...passed 00:06:42.501 Test: test_bdev_zone_append ...passed 00:06:42.501 Test: test_bdev_zone_append_with_md ...passed 00:06:42.501 Test: test_bdev_zone_appendv ...passed 00:06:42.501 Test: test_bdev_zone_appendv_with_md ...passed 00:06:42.501 Test: test_bdev_io_get_append_location ...passed 00:06:42.501 00:06:42.501 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.501 suites 1 1 n/a 0 0 00:06:42.501 tests 8 8 8 0 0 00:06:42.501 asserts 94 94 94 0 n/a 00:06:42.501 00:06:42.501 Elapsed time = 0.001 seconds 00:06:42.501 10:32:08 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:42.501 00:06:42.501 00:06:42.501 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.501 http://cunit.sourceforge.net/ 00:06:42.501 00:06:42.501 00:06:42.501 Suite: gpt_parse 00:06:42.501 Test: test_parse_mbr_and_primary ...[2024-07-24 10:32:08.988955] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:42.501 [2024-07-24 10:32:08.989295] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:42.501 [2024-07-24 10:32:08.989374] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:42.501 [2024-07-24 10:32:08.989465] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:42.501 [2024-07-24 10:32:08.989525] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:42.501 [2024-07-24 10:32:08.989614] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:42.501 passed 00:06:42.501 Test: test_parse_secondary ...[2024-07-24 10:32:08.990257] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:42.501 [2024-07-24 10:32:08.990310] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:42.501 [2024-07-24 10:32:08.990348] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:42.501 [2024-07-24 10:32:08.990386] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:42.501 passed 00:06:42.501 Test: test_check_mbr ...[2024-07-24 10:32:08.991003] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:42.501 [2024-07-24 10:32:08.991088] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:42.501 passed 00:06:42.501 Test: test_read_header ...[2024-07-24 10:32:08.991160] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:42.501 [2024-07-24 10:32:08.991299] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:42.501 [2024-07-24 10:32:08.991385] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:42.501 [2024-07-24 10:32:08.991428] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:42.501 [2024-07-24 10:32:08.991467] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:42.501 passed 00:06:42.501 Test: test_read_partitions ...[2024-07-24 10:32:08.991521] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:42.501 [2024-07-24 10:32:08.991577] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:42.501 [2024-07-24 10:32:08.991626] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:42.501 [2024-07-24 10:32:08.991673] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:42.501 [2024-07-24 10:32:08.991705] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:42.501 [2024-07-24 10:32:08.992016] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:42.501 passed 00:06:42.501 00:06:42.501 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.501 suites 1 1 n/a 0 0 00:06:42.501 tests 5 5 5 0 0 00:06:42.501 asserts 33 33 33 0 n/a 00:06:42.501 00:06:42.501 Elapsed time = 0.004 seconds 00:06:42.501 10:32:09 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:42.501 00:06:42.501 00:06:42.501 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.501 http://cunit.sourceforge.net/ 00:06:42.501 00:06:42.501 00:06:42.501 Suite: bdev_part 00:06:42.501 Test: part_test ...[2024-07-24 10:32:09.031643] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:42.501 passed 00:06:42.501 Test: part_free_test ...passed 00:06:42.501 Test: part_get_io_channel_test ...passed 00:06:42.501 Test: part_construct_ext ...passed 00:06:42.501 00:06:42.501 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.501 suites 1 1 n/a 0 0 00:06:42.501 tests 4 4 4 0 0 00:06:42.501 asserts 48 48 48 0 n/a 00:06:42.501 00:06:42.501 Elapsed time = 0.048 seconds 00:06:42.501 10:32:09 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:42.501 00:06:42.501 00:06:42.501 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.501 http://cunit.sourceforge.net/ 00:06:42.501 00:06:42.501 00:06:42.501 Suite: scsi_nvme_suite 00:06:42.501 Test: scsi_nvme_translate_test ...passed 00:06:42.501 00:06:42.501 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.501 suites 1 1 n/a 0 0 00:06:42.501 tests 1 1 1 0 0 00:06:42.501 asserts 104 104 104 0 n/a 00:06:42.501 00:06:42.501 Elapsed time = 0.000 seconds 
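The per-binary output above — suite header, one line per test, then a Run Summary counting suites, tests and asserts — is the standard CUnit basic-mode report. A minimal sketch of how such a binary is assembled follows; the suite name "example_suite", the test name "example_test" and the single assertion are illustrative stand-ins, not the actual SPDK unit-test sources.

    #include <CUnit/Basic.h>

    /* Illustrative test body: the one assertion below contributes exactly one
     * entry to the "asserts" column of the Run Summary. */
    static void
    example_test(void)
    {
            CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
            CU_pSuite suite;
            unsigned int failures;

            if (CU_initialize_registry() != CUE_SUCCESS) {
                    return CU_get_error();
            }

            suite = CU_add_suite("example_suite", NULL, NULL);
            if (suite == NULL ||
                CU_add_test(suite, "example_test", example_test) == NULL) {
                    CU_cleanup_registry();
                    return CU_get_error();
            }

            CU_basic_set_mode(CU_BRM_VERBOSE);      /* prints the per-test "passed" lines */
            CU_basic_run_tests();
            failures = CU_get_number_of_failures();
            CU_cleanup_registry();

            return failures == 0 ? 0 : 1;
    }

Every CU_ASSERT-style check that executes adds one to the "asserts" column of the Run Summary, which is why those counts run into the thousands for the larger suites above.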
00:06:42.501 10:32:09 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:42.501 00:06:42.501 00:06:42.501 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.501 http://cunit.sourceforge.net/ 00:06:42.501 00:06:42.501 00:06:42.501 Suite: lvol 00:06:42.501 Test: ut_lvs_init ...[2024-07-24 10:32:09.145567] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:42.501 passed 00:06:42.501 Test: ut_lvol_init ...[2024-07-24 10:32:09.146401] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:42.501 passed 00:06:42.501 Test: ut_lvol_snapshot ...passed 00:06:42.501 Test: ut_lvol_clone ...passed 00:06:42.501 Test: ut_lvs_destroy ...passed 00:06:42.501 Test: ut_lvs_unload ...passed 00:06:42.501 Test: ut_lvol_resize ...[2024-07-24 10:32:09.149251] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:42.501 passed 00:06:42.501 Test: ut_lvol_set_read_only ...passed 00:06:42.501 Test: ut_lvol_hotremove ...passed 00:06:42.501 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:42.501 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:42.501 Test: ut_lvol_read_write ...passed 00:06:42.501 Test: ut_vbdev_lvol_submit_request ...passed 00:06:42.501 Test: ut_lvol_examine_config ...passed 00:06:42.501 Test: ut_lvol_examine_disk ...[2024-07-24 10:32:09.151129] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:42.501 passed 00:06:42.501 Test: ut_lvol_rename ...[2024-07-24 10:32:09.152776] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:42.501 [2024-07-24 10:32:09.152929] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:42.501 passed 00:06:42.501 Test: ut_bdev_finish ...passed 00:06:42.501 Test: ut_lvs_rename ...passed 00:06:42.501 Test: ut_lvol_seek ...passed 00:06:42.501 Test: ut_esnap_dev_create ...[2024-07-24 10:32:09.154568] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:42.501 [2024-07-24 10:32:09.154663] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:42.501 [2024-07-24 10:32:09.154939] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:42.501 [2024-07-24 10:32:09.155217] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:42.501 passed 00:06:42.501 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-24 10:32:09.155825] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:42.501 [2024-07-24 10:32:09.155964] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:42.501 passed 00:06:42.501 00:06:42.501 Run Summary: Type Total Ran Passed Failed 
Inactive 00:06:42.501 suites 1 1 n/a 0 0 00:06:42.501 tests 21 21 21 0 0 00:06:42.501 asserts 712 712 712 0 n/a 00:06:42.501 00:06:42.501 Elapsed time = 0.011 seconds 00:06:42.502 10:32:09 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:42.761 00:06:42.761 00:06:42.761 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.761 http://cunit.sourceforge.net/ 00:06:42.761 00:06:42.761 00:06:42.761 Suite: zone_block 00:06:42.761 Test: test_zone_block_create ...passed 00:06:42.761 Test: test_zone_block_create_invalid ...[2024-07-24 10:32:09.216556] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:42.761 [2024-07-24 10:32:09.216995] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-24 10:32:09.217209] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:42.761 [2024-07-24 10:32:09.217299] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-24 10:32:09.217512] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:42.761 [2024-07-24 10:32:09.217571] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-24 10:32:09.217689] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:42.761 [2024-07-24 10:32:09.217751] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:42.761 Test: test_get_zone_info ...[2024-07-24 10:32:09.218368] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.218468] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 passed 00:06:42.761 Test: test_supported_io_types ...[2024-07-24 10:32:09.218545] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 passed 00:06:42.761 Test: test_reset_zone ...[2024-07-24 10:32:09.219570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.219657] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 passed 00:06:42.761 Test: test_open_zone ...[2024-07-24 10:32:09.220224] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.220992] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:42.761 [2024-07-24 10:32:09.221075] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 passed 00:06:42.761 Test: test_zone_write ...[2024-07-24 10:32:09.221639] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:42.761 [2024-07-24 10:32:09.221734] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.221802] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:42.761 [2024-07-24 10:32:09.221862] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.228325] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:42.761 [2024-07-24 10:32:09.228404] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.228474] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:42.761 [2024-07-24 10:32:09.228514] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.234942] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:42.761 passed 00:06:42.761 Test: test_zone_read ...[2024-07-24 10:32:09.235029] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.235558] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:42.761 [2024-07-24 10:32:09.235624] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.235712] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:42.761 [2024-07-24 10:32:09.235758] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.236253] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:42.761 [2024-07-24 10:32:09.236305] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 passed 00:06:42.761 Test: test_close_zone ...[2024-07-24 10:32:09.236723] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:42.761 [2024-07-24 10:32:09.236853] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.237143] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.237212] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 passed 00:06:42.761 Test: test_finish_zone ...[2024-07-24 10:32:09.237902] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 passed 00:06:42.761 Test: test_append_zone ...[2024-07-24 10:32:09.238009] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.238420] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:42.761 [2024-07-24 10:32:09.238493] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.238564] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:42.761 [2024-07-24 10:32:09.238597] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:42.761 [2024-07-24 10:32:09.251780] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:42.761 passed 00:06:42.761 00:06:42.761 [2024-07-24 10:32:09.251869] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:42.761 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.761 suites 1 1 n/a 0 0 00:06:42.761 tests 11 11 11 0 0 00:06:42.761 asserts 3437 3437 3437 0 n/a 00:06:42.761 00:06:42.761 Elapsed time = 0.037 seconds 00:06:42.761 10:32:09 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:42.761 00:06:42.761 00:06:42.761 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.761 http://cunit.sourceforge.net/ 00:06:42.761 00:06:42.761 00:06:42.761 Suite: bdev 00:06:42.761 Test: basic ...[2024-07-24 10:32:09.369318] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55f6048b9401): Operation not permitted (rc=-1) 00:06:42.761 [2024-07-24 10:32:09.369775] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55f6048b93c0): Operation not permitted (rc=-1) 00:06:42.761 [2024-07-24 10:32:09.369836] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55f6048b9401): Operation not permitted (rc=-1) 00:06:42.761 passed 00:06:43.019 Test: unregister_and_close ...passed 00:06:43.019 Test: unregister_and_close_different_threads ...passed 00:06:43.019 Test: basic_qos ...passed 00:06:43.019 Test: put_channel_during_reset ...passed 00:06:43.278 Test: aborted_reset ...passed 00:06:43.278 Test: aborted_reset_no_outstanding_io ...passed 00:06:43.278 Test: io_during_reset ...passed 00:06:43.278 Test: reset_completions ...passed 00:06:43.536 Test: io_during_qos_queue ...passed 00:06:43.536 Test: io_during_qos_reset ...passed 00:06:43.536 Test: enomem ...passed 00:06:43.536 Test: enomem_multi_bdev ...passed 00:06:43.536 Test: enomem_multi_bdev_unregister ...passed 00:06:43.794 Test: enomem_multi_io_target ...passed 00:06:43.794 Test: qos_dynamic_enable ...passed 00:06:43.794 Test: bdev_histograms_mt ...passed 00:06:43.794 Test: bdev_set_io_timeout_mt ...[2024-07-24 10:32:10.434153] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:43.794 passed 00:06:43.794 Test: lock_lba_range_then_submit_io ...[2024-07-24 10:32:10.462381] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55f6048b9380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:44.052 passed 00:06:44.052 Test: unregister_during_reset ...passed 00:06:44.052 Test: event_notify_and_close ...passed 00:06:44.052 Test: unregister_and_qos_poller ...passed 00:06:44.052 Suite: bdev_wrong_thread 00:06:44.052 Test: spdk_bdev_register_wt ...[2024-07-24 10:32:10.654643] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:44.052 passed 00:06:44.052 Test: spdk_bdev_examine_wt ...[2024-07-24 10:32:10.655204] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:44.052 passed 00:06:44.052 00:06:44.052 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.052 suites 2 2 n/a 0 0 00:06:44.052 tests 24 24 24 0 0 00:06:44.052 asserts 621 621 621 0 n/a 00:06:44.052 00:06:44.052 Elapsed time = 1.317 seconds 00:06:44.052 00:06:44.052 real 0m4.901s 00:06:44.052 user 0m2.195s 00:06:44.052 sys 0m2.690s 00:06:44.052 10:32:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.052 10:32:10 -- common/autotest_common.sh@10 -- # set +x 00:06:44.052 ************************************ 
00:06:44.052 END TEST unittest_bdev 00:06:44.052 ************************************ 00:06:44.052 10:32:10 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:44.314 10:32:10 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:44.314 10:32:10 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:44.314 10:32:10 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:44.314 10:32:10 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:44.314 10:32:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:44.314 10:32:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.314 10:32:10 -- common/autotest_common.sh@10 -- # set +x 00:06:44.314 ************************************ 00:06:44.314 START TEST unittest_bdev_raid5f 00:06:44.314 ************************************ 00:06:44.314 10:32:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:44.314 00:06:44.314 00:06:44.314 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.314 http://cunit.sourceforge.net/ 00:06:44.314 00:06:44.314 00:06:44.314 Suite: raid5f 00:06:44.314 Test: test_raid5f_start ...passed 00:06:44.908 Test: test_raid5f_submit_read_request ...passed 00:06:45.166 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:48.448 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:06.528 Test: test_raid5f_chunk_write_error ...passed 00:07:14.689 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:17.969 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:50.038 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:50.038 00:07:50.038 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.038 suites 1 1 n/a 0 0 00:07:50.038 tests 8 8 8 0 0 00:07:50.038 asserts 351864 351864 351864 0 n/a 00:07:50.038 00:07:50.038 Elapsed time = 61.892 seconds 00:07:50.038 00:07:50.038 real 1m1.987s 00:07:50.038 user 0m58.606s 00:07:50.038 sys 0m3.376s 00:07:50.038 10:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.038 10:33:12 -- common/autotest_common.sh@10 -- # set +x 00:07:50.038 ************************************ 00:07:50.038 END TEST unittest_bdev_raid5f 00:07:50.038 ************************************ 00:07:50.038 10:33:12 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:50.038 10:33:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:50.038 10:33:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.038 10:33:12 -- common/autotest_common.sh@10 -- # set +x 00:07:50.038 ************************************ 00:07:50.038 START TEST unittest_blob_blobfs 00:07:50.038 ************************************ 00:07:50.038 10:33:12 -- common/autotest_common.sh@1104 -- # unittest_blob 00:07:50.038 10:33:12 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:50.038 10:33:12 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:50.038 00:07:50.038 00:07:50.038 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.038 
http://cunit.sourceforge.net/ 00:07:50.038 00:07:50.038 00:07:50.038 Suite: blob_nocopy_noextent 00:07:50.038 Test: blob_init ...[2024-07-24 10:33:12.830042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:50.038 passed 00:07:50.038 Test: blob_thin_provision ...passed 00:07:50.038 Test: blob_read_only ...passed 00:07:50.038 Test: bs_load ...[2024-07-24 10:33:12.935094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:50.038 passed 00:07:50.038 Test: bs_load_custom_cluster_size ...passed 00:07:50.038 Test: bs_load_after_failed_grow ...passed 00:07:50.038 Test: bs_cluster_sz ...[2024-07-24 10:33:12.971416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:50.038 [2024-07-24 10:33:12.971935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:50.038 [2024-07-24 10:33:12.972192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:50.038 passed 00:07:50.038 Test: bs_resize_md ...passed 00:07:50.038 Test: bs_destroy ...passed 00:07:50.038 Test: bs_type ...passed 00:07:50.038 Test: bs_super_block ...passed 00:07:50.038 Test: bs_test_recover_cluster_count ...passed 00:07:50.038 Test: bs_grow_live ...passed 00:07:50.038 Test: bs_grow_live_no_space ...passed 00:07:50.038 Test: bs_test_grow ...passed 00:07:50.038 Test: blob_serialize_test ...passed 00:07:50.038 Test: super_block_crc ...passed 00:07:50.038 Test: blob_thin_prov_write_count_io ...passed 00:07:50.038 Test: bs_load_iter_test ...passed 00:07:50.038 Test: blob_relations ...[2024-07-24 10:33:13.168333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.038 [2024-07-24 10:33:13.168471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 [2024-07-24 10:33:13.169443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.038 [2024-07-24 10:33:13.169535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 passed 00:07:50.038 Test: blob_relations2 ...[2024-07-24 10:33:13.186636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.038 [2024-07-24 10:33:13.186747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 [2024-07-24 10:33:13.186797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.038 [2024-07-24 10:33:13.186821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 [2024-07-24 10:33:13.188390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.038 [2024-07-24 10:33:13.188456] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 [2024-07-24 10:33:13.188894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.038 [2024-07-24 10:33:13.188955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 passed 00:07:50.038 Test: blob_relations3 ...passed 00:07:50.038 Test: blobstore_clean_power_failure ...passed 00:07:50.038 Test: blob_delete_snapshot_power_failure ...[2024-07-24 10:33:13.381974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.038 [2024-07-24 10:33:13.396896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.038 [2024-07-24 10:33:13.397002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.038 [2024-07-24 10:33:13.397067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 [2024-07-24 10:33:13.412186] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.038 [2024-07-24 10:33:13.412336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:50.038 [2024-07-24 10:33:13.412423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.038 [2024-07-24 10:33:13.412486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 [2024-07-24 10:33:13.427452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:50.038 [2024-07-24 10:33:13.427666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.038 [2024-07-24 10:33:13.442562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:50.039 [2024-07-24 10:33:13.442723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.039 [2024-07-24 10:33:13.457862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:50.039 [2024-07-24 10:33:13.457998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.039 passed 00:07:50.039 Test: blob_create_snapshot_power_failure ...[2024-07-24 10:33:13.502235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.039 [2024-07-24 10:33:13.530875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:50.039 [2024-07-24 10:33:13.545592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:50.039 passed 00:07:50.039 Test: blob_io_unit ...passed 00:07:50.039 Test: blob_io_unit_compatibility 
...passed 00:07:50.039 Test: blob_ext_md_pages ...passed 00:07:50.039 Test: blob_esnap_io_4096_4096 ...passed 00:07:50.039 Test: blob_esnap_io_512_512 ...passed 00:07:50.039 Test: blob_esnap_io_4096_512 ...passed 00:07:50.039 Test: blob_esnap_io_512_4096 ...passed 00:07:50.039 Suite: blob_bs_nocopy_noextent 00:07:50.039 Test: blob_open ...passed 00:07:50.039 Test: blob_create ...[2024-07-24 10:33:13.841064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:50.039 passed 00:07:50.039 Test: blob_create_loop ...passed 00:07:50.039 Test: blob_create_fail ...[2024-07-24 10:33:13.954275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.039 passed 00:07:50.039 Test: blob_create_internal ...passed 00:07:50.039 Test: blob_create_zero_extent ...passed 00:07:50.039 Test: blob_snapshot ...passed 00:07:50.039 Test: blob_clone ...passed 00:07:50.039 Test: blob_inflate ...[2024-07-24 10:33:14.180002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:50.039 passed 00:07:50.039 Test: blob_delete ...passed 00:07:50.039 Test: blob_resize_test ...[2024-07-24 10:33:14.260868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:50.039 passed 00:07:50.039 Test: channel_ops ...passed 00:07:50.039 Test: blob_super ...passed 00:07:50.039 Test: blob_rw_verify_iov ...passed 00:07:50.039 Test: blob_unmap ...passed 00:07:50.039 Test: blob_iter ...passed 00:07:50.039 Test: blob_parse_md ...passed 00:07:50.039 Test: bs_load_pending_removal ...passed 00:07:50.039 Test: bs_unload ...[2024-07-24 10:33:14.590040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:50.039 passed 00:07:50.039 Test: bs_usable_clusters ...passed 00:07:50.039 Test: blob_crc ...[2024-07-24 10:33:14.672222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:50.039 [2024-07-24 10:33:14.672413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:50.039 passed 00:07:50.039 Test: blob_flags ...passed 00:07:50.039 Test: bs_version ...passed 00:07:50.039 Test: blob_set_xattrs_test ...[2024-07-24 10:33:14.796293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.039 [2024-07-24 10:33:14.796415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:50.039 passed 00:07:50.039 Test: blob_thin_prov_alloc ...passed 00:07:50.039 Test: blob_insert_cluster_msg_test ...passed 00:07:50.039 Test: blob_thin_prov_rw ...passed 00:07:50.039 Test: blob_thin_prov_rle ...passed 00:07:50.039 Test: blob_thin_prov_rw_iov ...passed 00:07:50.039 Test: blob_snapshot_rw ...passed 00:07:50.039 Test: blob_snapshot_rw_iov ...passed 00:07:50.039 Test: blob_inflate_rw ...passed 00:07:50.039 Test: blob_snapshot_freeze_io ...passed 00:07:50.039 Test: blob_operation_split_rw ...passed 00:07:50.039 Test: blob_operation_split_rw_iov ...passed 
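The *ERROR* lines interleaved with "passed" results above are expected: many of these cases are negative-path tests that drive an API into a failure on purpose, so the library logs the error while the test still passes because that failure was the asserted outcome. A rough sketch of the pattern, using a hypothetical fake_open() helper rather than any real SPDK call:

    #include <assert.h>
    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for an open/claim call that rejects a second writer.
     * Not an SPDK API; it exists only to show the expected-error pattern. */
    static int
    fake_open(const char *name, bool write, bool already_claimed)
    {
            (void)name;
            if (write && already_claimed) {
                    return -EPERM;          /* the "error" the test wants to see */
            }
            return 0;
    }

    int
    main(void)
    {
            /* Negative path: the check passes precisely because the call fails. */
            assert(fake_open("bdev0", true, true) == -EPERM);
            /* Read-only access is still allowed. */
            assert(fake_open("bdev0", false, true) == 0);
            printf("negative-path checks passed\n");
            return 0;
    }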
00:07:50.039 Test: blob_simultaneous_operations ...[2024-07-24 10:33:15.887646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:50.039 [2024-07-24 10:33:15.887766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.039 [2024-07-24 10:33:15.888979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:50.039 [2024-07-24 10:33:15.889039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.039 [2024-07-24 10:33:15.900837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:50.039 [2024-07-24 10:33:15.900939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.039 [2024-07-24 10:33:15.901095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:50.039 [2024-07-24 10:33:15.901136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.039 passed 00:07:50.039 Test: blob_persist_test ...passed 00:07:50.039 Test: blob_decouple_snapshot ...passed 00:07:50.039 Test: blob_seek_io_unit ...passed 00:07:50.039 Test: blob_nested_freezes ...passed 00:07:50.039 Suite: blob_blob_nocopy_noextent 00:07:50.039 Test: blob_write ...passed 00:07:50.039 Test: blob_read ...passed 00:07:50.039 Test: blob_rw_verify ...passed 00:07:50.039 Test: blob_rw_verify_iov_nomem ...passed 00:07:50.039 Test: blob_rw_iov_read_only ...passed 00:07:50.039 Test: blob_xattr ...passed 00:07:50.039 Test: blob_dirty_shutdown ...passed 00:07:50.039 Test: blob_is_degraded ...passed 00:07:50.039 Suite: blob_esnap_bs_nocopy_noextent 00:07:50.039 Test: blob_esnap_create ...passed 00:07:50.039 Test: blob_esnap_thread_add_remove ...passed 00:07:50.039 Test: blob_esnap_clone_snapshot ...passed 00:07:50.039 Test: blob_esnap_clone_inflate ...passed 00:07:50.039 Test: blob_esnap_clone_decouple ...passed 00:07:50.298 Test: blob_esnap_clone_reload ...passed 00:07:50.298 Test: blob_esnap_hotplug ...passed 00:07:50.298 Suite: blob_nocopy_extent 00:07:50.298 Test: blob_init ...[2024-07-24 10:33:16.757616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:50.298 passed 00:07:50.298 Test: blob_thin_provision ...passed 00:07:50.298 Test: blob_read_only ...passed 00:07:50.298 Test: bs_load ...[2024-07-24 10:33:16.813863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:50.298 passed 00:07:50.298 Test: bs_load_custom_cluster_size ...passed 00:07:50.298 Test: bs_load_after_failed_grow ...passed 00:07:50.298 Test: bs_cluster_sz ...[2024-07-24 10:33:16.844701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:50.298 [2024-07-24 10:33:16.845007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
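The bs_cluster_sz case above feeds deliberately invalid spdk_bs_opts values into spdk_bs_init(), which is why bs_opts_verify() and spdk_bs_init() reject them. A hedged sketch of the normal initialization path follows, assuming a recent SPDK where spdk_bs_opts_init() takes the struct size; the back-end bs_dev and the callback names are illustrative only.

#include "spdk/stdinc.h"
#include "spdk/blob.h"
#include "spdk/log.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
    if (bserrno != 0) {
        /* -EINVAL here maps to the bs_opts_verify()/bs_alloc() rejections above. */
        SPDK_ERRLOG("blobstore init failed: %d\n", bserrno);
        return;
    }
    SPDK_NOTICELOG("blobstore ready, cluster size %" PRIu64 "\n",
                   spdk_bs_get_cluster_size(bs));
}

static void
bs_init_example(struct spdk_bs_dev *dev)
{
    struct spdk_bs_opts opts;

    spdk_bs_opts_init(&opts, sizeof(opts));
    opts.cluster_sz = 1024 * 1024;    /* non-zero and at least the 4 KiB metadata page size */
    spdk_bs_init(dev, &opts, bs_init_done, NULL);
}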
00:07:50.298 [2024-07-24 10:33:16.845075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:50.298 passed 00:07:50.298 Test: bs_resize_md ...passed 00:07:50.298 Test: bs_destroy ...passed 00:07:50.298 Test: bs_type ...passed 00:07:50.298 Test: bs_super_block ...passed 00:07:50.298 Test: bs_test_recover_cluster_count ...passed 00:07:50.298 Test: bs_grow_live ...passed 00:07:50.298 Test: bs_grow_live_no_space ...passed 00:07:50.298 Test: bs_test_grow ...passed 00:07:50.298 Test: blob_serialize_test ...passed 00:07:50.573 Test: super_block_crc ...passed 00:07:50.573 Test: blob_thin_prov_write_count_io ...passed 00:07:50.573 Test: bs_load_iter_test ...passed 00:07:50.573 Test: blob_relations ...[2024-07-24 10:33:17.025190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.573 [2024-07-24 10:33:17.025315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.573 [2024-07-24 10:33:17.026338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.573 [2024-07-24 10:33:17.026424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.573 passed 00:07:50.573 Test: blob_relations2 ...[2024-07-24 10:33:17.044903] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.573 [2024-07-24 10:33:17.045068] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.573 [2024-07-24 10:33:17.045137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.573 [2024-07-24 10:33:17.045205] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.573 [2024-07-24 10:33:17.047730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.573 [2024-07-24 10:33:17.047837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.573 [2024-07-24 10:33:17.048581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:50.573 [2024-07-24 10:33:17.048695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.573 passed 00:07:50.573 Test: blob_relations3 ...passed 00:07:50.832 Test: blobstore_clean_power_failure ...passed 00:07:50.832 Test: blob_delete_snapshot_power_failure ...[2024-07-24 10:33:17.340288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:50.832 [2024-07-24 10:33:17.361129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:50.832 [2024-07-24 10:33:17.381914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:50.832 [2024-07-24 10:33:17.382072] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.832 [2024-07-24 10:33:17.382118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.832 [2024-07-24 10:33:17.402677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:50.832 [2024-07-24 10:33:17.402827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:50.832 [2024-07-24 10:33:17.402880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.832 [2024-07-24 10:33:17.402923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.832 [2024-07-24 10:33:17.423292] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:50.832 [2024-07-24 10:33:17.423442] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:50.832 [2024-07-24 10:33:17.423486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:50.832 [2024-07-24 10:33:17.423574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.832 [2024-07-24 10:33:17.444164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:50.832 [2024-07-24 10:33:17.444362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.832 [2024-07-24 10:33:17.464978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:50.832 [2024-07-24 10:33:17.465179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:50.832 [2024-07-24 10:33:17.485994] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:50.832 [2024-07-24 10:33:17.486178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:51.090 passed 00:07:51.090 Test: blob_create_snapshot_power_failure ...[2024-07-24 10:33:17.548196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:51.090 [2024-07-24 10:33:17.568549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:51.090 [2024-07-24 10:33:17.611540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:51.090 [2024-07-24 10:33:17.634208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:51.090 passed 00:07:51.090 Test: blob_io_unit ...passed 00:07:51.090 Test: blob_io_unit_compatibility ...passed 00:07:51.090 Test: blob_ext_md_pages ...passed 00:07:51.349 Test: blob_esnap_io_4096_4096 ...passed 00:07:51.349 Test: blob_esnap_io_512_512 ...passed 00:07:51.349 Test: blob_esnap_io_4096_512 ...passed 00:07:51.349 Test: 
blob_esnap_io_512_4096 ...passed 00:07:51.349 Suite: blob_bs_nocopy_extent 00:07:51.349 Test: blob_open ...passed 00:07:51.607 Test: blob_create ...[2024-07-24 10:33:18.033161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:51.607 passed 00:07:51.607 Test: blob_create_loop ...passed 00:07:51.607 Test: blob_create_fail ...[2024-07-24 10:33:18.191779] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:51.607 passed 00:07:51.607 Test: blob_create_internal ...passed 00:07:51.873 Test: blob_create_zero_extent ...passed 00:07:51.873 Test: blob_snapshot ...passed 00:07:51.873 Test: blob_clone ...passed 00:07:51.873 Test: blob_inflate ...[2024-07-24 10:33:18.498561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:51.873 passed 00:07:52.134 Test: blob_delete ...passed 00:07:52.134 Test: blob_resize_test ...[2024-07-24 10:33:18.611904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:52.134 passed 00:07:52.134 Test: channel_ops ...passed 00:07:52.134 Test: blob_super ...passed 00:07:52.405 Test: blob_rw_verify_iov ...passed 00:07:52.405 Test: blob_unmap ...passed 00:07:52.405 Test: blob_iter ...passed 00:07:52.405 Test: blob_parse_md ...passed 00:07:52.405 Test: bs_load_pending_removal ...passed 00:07:52.663 Test: bs_unload ...[2024-07-24 10:33:19.086045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:52.663 passed 00:07:52.663 Test: bs_usable_clusters ...passed 00:07:52.663 Test: blob_crc ...[2024-07-24 10:33:19.204079] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:52.663 [2024-07-24 10:33:19.204290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:52.663 passed 00:07:52.663 Test: blob_flags ...passed 00:07:52.920 Test: bs_version ...passed 00:07:52.920 Test: blob_set_xattrs_test ...[2024-07-24 10:33:19.389121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:52.920 [2024-07-24 10:33:19.389290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:52.920 passed 00:07:52.920 Test: blob_thin_prov_alloc ...passed 00:07:53.177 Test: blob_insert_cluster_msg_test ...passed 00:07:53.177 Test: blob_thin_prov_rw ...passed 00:07:53.177 Test: blob_thin_prov_rle ...passed 00:07:53.177 Test: blob_thin_prov_rw_iov ...passed 00:07:53.434 Test: blob_snapshot_rw ...passed 00:07:53.434 Test: blob_snapshot_rw_iov ...passed 00:07:53.691 Test: blob_inflate_rw ...passed 00:07:53.691 Test: blob_snapshot_freeze_io ...passed 00:07:53.948 Test: blob_operation_split_rw ...passed 00:07:54.205 Test: blob_operation_split_rw_iov ...passed 00:07:54.206 Test: blob_simultaneous_operations ...[2024-07-24 10:33:20.700377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.206 [2024-07-24 
10:33:20.700524] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.206 [2024-07-24 10:33:20.701865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.206 [2024-07-24 10:33:20.701927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.206 [2024-07-24 10:33:20.715489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.206 [2024-07-24 10:33:20.715650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.206 [2024-07-24 10:33:20.715821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:54.206 [2024-07-24 10:33:20.715856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:54.206 passed 00:07:54.206 Test: blob_persist_test ...passed 00:07:54.462 Test: blob_decouple_snapshot ...passed 00:07:54.462 Test: blob_seek_io_unit ...passed 00:07:54.462 Test: blob_nested_freezes ...passed 00:07:54.462 Suite: blob_blob_nocopy_extent 00:07:54.462 Test: blob_write ...passed 00:07:54.721 Test: blob_read ...passed 00:07:54.721 Test: blob_rw_verify ...passed 00:07:54.721 Test: blob_rw_verify_iov_nomem ...passed 00:07:54.721 Test: blob_rw_iov_read_only ...passed 00:07:54.979 Test: blob_xattr ...passed 00:07:54.979 Test: blob_dirty_shutdown ...passed 00:07:54.979 Test: blob_is_degraded ...passed 00:07:54.979 Suite: blob_esnap_bs_nocopy_extent 00:07:54.979 Test: blob_esnap_create ...passed 00:07:54.979 Test: blob_esnap_thread_add_remove ...passed 00:07:55.236 Test: blob_esnap_clone_snapshot ...passed 00:07:55.236 Test: blob_esnap_clone_inflate ...passed 00:07:55.236 Test: blob_esnap_clone_decouple ...passed 00:07:55.236 Test: blob_esnap_clone_reload ...passed 00:07:55.494 Test: blob_esnap_hotplug ...passed 00:07:55.494 Suite: blob_copy_noextent 00:07:55.494 Test: blob_init ...[2024-07-24 10:33:21.942893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:55.494 passed 00:07:55.494 Test: blob_thin_provision ...passed 00:07:55.494 Test: blob_read_only ...passed 00:07:55.494 Test: bs_load ...[2024-07-24 10:33:22.026170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:55.494 passed 00:07:55.494 Test: bs_load_custom_cluster_size ...passed 00:07:55.494 Test: bs_load_after_failed_grow ...passed 00:07:55.494 Test: bs_cluster_sz ...[2024-07-24 10:33:22.068110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:55.494 [2024-07-24 10:33:22.068403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:55.494 [2024-07-24 10:33:22.068466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:55.494 passed 00:07:55.494 Test: bs_resize_md ...passed 00:07:55.494 Test: bs_destroy ...passed 00:07:55.494 Test: bs_type ...passed 00:07:55.752 Test: bs_super_block ...passed 00:07:55.752 Test: bs_test_recover_cluster_count ...passed 00:07:55.752 Test: bs_grow_live ...passed 00:07:55.752 Test: bs_grow_live_no_space ...passed 00:07:55.752 Test: bs_test_grow ...passed 00:07:55.752 Test: blob_serialize_test ...passed 00:07:55.752 Test: super_block_crc ...passed 00:07:55.752 Test: blob_thin_prov_write_count_io ...passed 00:07:55.752 Test: bs_load_iter_test ...passed 00:07:55.752 Test: blob_relations ...[2024-07-24 10:33:22.317022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:55.752 [2024-07-24 10:33:22.317187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.752 [2024-07-24 10:33:22.317873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:55.752 [2024-07-24 10:33:22.317930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.752 passed 00:07:55.752 Test: blob_relations2 ...[2024-07-24 10:33:22.339325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:55.752 [2024-07-24 10:33:22.339480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.752 [2024-07-24 10:33:22.339541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:55.752 [2024-07-24 10:33:22.339564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.752 [2024-07-24 10:33:22.340637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:55.752 [2024-07-24 10:33:22.340723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.752 [2024-07-24 10:33:22.341063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:55.752 [2024-07-24 10:33:22.341128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:55.752 passed 00:07:55.752 Test: blob_relations3 ...passed 00:07:56.010 Test: blobstore_clean_power_failure ...passed 00:07:56.010 Test: blob_delete_snapshot_power_failure ...[2024-07-24 10:33:22.612604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:56.010 [2024-07-24 10:33:22.632401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:56.010 [2024-07-24 10:33:22.632568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:56.010 [2024-07-24 10:33:22.632610] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.010 [2024-07-24 10:33:22.652325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:56.010 [2024-07-24 10:33:22.652477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:56.010 [2024-07-24 10:33:22.652529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:56.010 [2024-07-24 10:33:22.652562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.010 [2024-07-24 10:33:22.672784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:56.010 [2024-07-24 10:33:22.672981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.267 [2024-07-24 10:33:22.693006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:56.268 [2024-07-24 10:33:22.693204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.268 [2024-07-24 10:33:22.713234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:56.268 [2024-07-24 10:33:22.713416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:56.268 passed 00:07:56.268 Test: blob_create_snapshot_power_failure ...[2024-07-24 10:33:22.773382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:56.268 [2024-07-24 10:33:22.812636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:56.268 [2024-07-24 10:33:22.832510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:56.268 passed 00:07:56.268 Test: blob_io_unit ...passed 00:07:56.268 Test: blob_io_unit_compatibility ...passed 00:07:56.525 Test: blob_ext_md_pages ...passed 00:07:56.525 Test: blob_esnap_io_4096_4096 ...passed 00:07:56.525 Test: blob_esnap_io_512_512 ...passed 00:07:56.525 Test: blob_esnap_io_4096_512 ...passed 00:07:56.525 Test: blob_esnap_io_512_4096 ...passed 00:07:56.525 Suite: blob_bs_copy_noextent 00:07:56.525 Test: blob_open ...passed 00:07:56.782 Test: blob_create ...[2024-07-24 10:33:23.230086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:56.782 passed 00:07:56.782 Test: blob_create_loop ...passed 00:07:56.782 Test: blob_create_fail ...[2024-07-24 10:33:23.374731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:56.783 passed 00:07:56.783 Test: blob_create_internal ...passed 00:07:57.039 Test: blob_create_zero_extent ...passed 00:07:57.039 Test: blob_snapshot ...passed 00:07:57.039 Test: blob_clone ...passed 00:07:57.039 Test: blob_inflate ...[2024-07-24 10:33:23.670729] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:57.039 passed 00:07:57.297 Test: blob_delete ...passed 00:07:57.297 Test: blob_resize_test ...[2024-07-24 10:33:23.782545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:57.297 passed 00:07:57.297 Test: channel_ops ...passed 00:07:57.297 Test: blob_super ...passed 00:07:57.297 Test: blob_rw_verify_iov ...passed 00:07:57.555 Test: blob_unmap ...passed 00:07:57.555 Test: blob_iter ...passed 00:07:57.555 Test: blob_parse_md ...passed 00:07:57.555 Test: bs_load_pending_removal ...passed 00:07:57.813 Test: bs_unload ...[2024-07-24 10:33:24.243453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:57.813 passed 00:07:57.813 Test: bs_usable_clusters ...passed 00:07:57.813 Test: blob_crc ...[2024-07-24 10:33:24.359930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:57.813 [2024-07-24 10:33:24.360080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:57.813 passed 00:07:57.813 Test: blob_flags ...passed 00:07:58.071 Test: bs_version ...passed 00:07:58.071 Test: blob_set_xattrs_test ...[2024-07-24 10:33:24.532694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:58.071 [2024-07-24 10:33:24.532869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:58.071 passed 00:07:58.071 Test: blob_thin_prov_alloc ...passed 00:07:58.329 Test: blob_insert_cluster_msg_test ...passed 00:07:58.329 Test: blob_thin_prov_rw ...passed 00:07:58.329 Test: blob_thin_prov_rle ...passed 00:07:58.329 Test: blob_thin_prov_rw_iov ...passed 00:07:58.587 Test: blob_snapshot_rw ...passed 00:07:58.587 Test: blob_snapshot_rw_iov ...passed 00:07:58.845 Test: blob_inflate_rw ...passed 00:07:58.845 Test: blob_snapshot_freeze_io ...passed 00:07:59.103 Test: blob_operation_split_rw ...passed 00:07:59.103 Test: blob_operation_split_rw_iov ...passed 00:07:59.361 Test: blob_simultaneous_operations ...[2024-07-24 10:33:25.813035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:59.361 [2024-07-24 10:33:25.813182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.361 [2024-07-24 10:33:25.813785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:59.361 [2024-07-24 10:33:25.813837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.361 [2024-07-24 10:33:25.817275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:59.361 [2024-07-24 10:33:25.817345] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.361 [2024-07-24 10:33:25.817467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:59.361 [2024-07-24 10:33:25.817500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:59.361 passed 00:07:59.362 Test: blob_persist_test ...passed 00:07:59.362 Test: blob_decouple_snapshot ...passed 00:07:59.362 Test: blob_seek_io_unit ...passed 00:07:59.620 Test: blob_nested_freezes ...passed 00:07:59.620 Suite: blob_blob_copy_noextent 00:07:59.620 Test: blob_write ...passed 00:07:59.620 Test: blob_read ...passed 00:07:59.620 Test: blob_rw_verify ...passed 00:07:59.879 Test: blob_rw_verify_iov_nomem ...passed 00:07:59.879 Test: blob_rw_iov_read_only ...passed 00:07:59.879 Test: blob_xattr ...passed 00:07:59.879 Test: blob_dirty_shutdown ...passed 00:08:00.137 Test: blob_is_degraded ...passed 00:08:00.137 Suite: blob_esnap_bs_copy_noextent 00:08:00.137 Test: blob_esnap_create ...passed 00:08:00.137 Test: blob_esnap_thread_add_remove ...passed 00:08:00.137 Test: blob_esnap_clone_snapshot ...passed 00:08:00.137 Test: blob_esnap_clone_inflate ...passed 00:08:00.396 Test: blob_esnap_clone_decouple ...passed 00:08:00.396 Test: blob_esnap_clone_reload ...passed 00:08:00.396 Test: blob_esnap_hotplug ...passed 00:08:00.396 Suite: blob_copy_extent 00:08:00.396 Test: blob_init ...[2024-07-24 10:33:26.967055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:00.396 passed 00:08:00.396 Test: blob_thin_provision ...passed 00:08:00.396 Test: blob_read_only ...passed 00:08:00.396 Test: bs_load ...[2024-07-24 10:33:27.044286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:00.396 passed 00:08:00.396 Test: bs_load_custom_cluster_size ...passed 00:08:00.655 Test: bs_load_after_failed_grow ...passed 00:08:00.655 Test: bs_cluster_sz ...[2024-07-24 10:33:27.085260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:00.655 [2024-07-24 10:33:27.085522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
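The blob_resize_test failures earlier in this run (bs_resize_unfreeze_cpl with ctx->rc=-28) exercise the out-of-space path of the public resize call. A hedged sketch of that call sequence, assuming a recent SPDK; the open blob handle and the callback names are illustrative, and the unit test's error injection is not shown.

#include "spdk/stdinc.h"
#include "spdk/blob.h"
#include "spdk/log.h"

static void
sync_done(void *cb_arg, int bserrno)
{
    if (bserrno != 0) {
        SPDK_ERRLOG("metadata sync after resize failed: %d\n", bserrno);
    }
}

static void
resize_done(void *cb_arg, int bserrno)
{
    struct spdk_blob *blob = cb_arg;

    if (bserrno == -ENOSPC) {
        /* -28 as seen above: the blobstore could not supply the extra clusters. */
        SPDK_ERRLOG("resize failed: out of clusters\n");
        return;
    }
    if (bserrno != 0) {
        SPDK_ERRLOG("resize failed: %d\n", bserrno);
        return;
    }
    /* The new size only becomes persistent once the metadata is synced. */
    spdk_blob_sync_md(blob, sync_done, NULL);
}

static void
resize_example(struct spdk_blob *blob)
{
    spdk_blob_resize(blob, 64 /* new size in clusters */, resize_done, blob);
}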
00:08:00.655 [2024-07-24 10:33:27.085578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:00.655 passed 00:08:00.655 Test: bs_resize_md ...passed 00:08:00.655 Test: bs_destroy ...passed 00:08:00.655 Test: bs_type ...passed 00:08:00.655 Test: bs_super_block ...passed 00:08:00.655 Test: bs_test_recover_cluster_count ...passed 00:08:00.655 Test: bs_grow_live ...passed 00:08:00.655 Test: bs_grow_live_no_space ...passed 00:08:00.655 Test: bs_test_grow ...passed 00:08:00.655 Test: blob_serialize_test ...passed 00:08:00.655 Test: super_block_crc ...passed 00:08:00.655 Test: blob_thin_prov_write_count_io ...passed 00:08:00.655 Test: bs_load_iter_test ...passed 00:08:00.912 Test: blob_relations ...[2024-07-24 10:33:27.334251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.912 [2024-07-24 10:33:27.334421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.912 [2024-07-24 10:33:27.335554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.912 [2024-07-24 10:33:27.335645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.912 passed 00:08:00.912 Test: blob_relations2 ...[2024-07-24 10:33:27.357896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.912 [2024-07-24 10:33:27.358052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.912 [2024-07-24 10:33:27.358124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.912 [2024-07-24 10:33:27.358161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.912 [2024-07-24 10:33:27.359797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.913 [2024-07-24 10:33:27.359870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.913 [2024-07-24 10:33:27.360362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:00.913 [2024-07-24 10:33:27.360447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:00.913 passed 00:08:00.913 Test: blob_relations3 ...passed 00:08:01.170 Test: blobstore_clean_power_failure ...passed 00:08:01.170 Test: blob_delete_snapshot_power_failure ...[2024-07-24 10:33:27.632035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:01.170 [2024-07-24 10:33:27.652715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:01.170 [2024-07-24 10:33:27.673348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:01.170 [2024-07-24 10:33:27.673532] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:01.170 [2024-07-24 10:33:27.673577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.170 [2024-07-24 10:33:27.697356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:01.170 [2024-07-24 10:33:27.697497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:01.170 [2024-07-24 10:33:27.697530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:01.170 [2024-07-24 10:33:27.697567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.170 [2024-07-24 10:33:27.717432] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:01.170 [2024-07-24 10:33:27.717589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:01.170 [2024-07-24 10:33:27.717621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:01.170 [2024-07-24 10:33:27.717656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.170 [2024-07-24 10:33:27.737780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:01.170 [2024-07-24 10:33:27.737943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.170 [2024-07-24 10:33:27.757850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:01.170 [2024-07-24 10:33:27.758028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.170 [2024-07-24 10:33:27.778041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:01.171 [2024-07-24 10:33:27.778206] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:01.171 passed 00:08:01.171 Test: blob_create_snapshot_power_failure ...[2024-07-24 10:33:27.837984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:01.429 [2024-07-24 10:33:27.858135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:01.429 [2024-07-24 10:33:27.897608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:01.429 [2024-07-24 10:33:27.917893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:01.429 passed 00:08:01.429 Test: blob_io_unit ...passed 00:08:01.429 Test: blob_io_unit_compatibility ...passed 00:08:01.429 Test: blob_ext_md_pages ...passed 00:08:01.429 Test: blob_esnap_io_4096_4096 ...passed 00:08:01.689 Test: blob_esnap_io_512_512 ...passed 00:08:01.689 Test: blob_esnap_io_4096_512 ...passed 00:08:01.689 Test: 
blob_esnap_io_512_4096 ...passed 00:08:01.689 Suite: blob_bs_copy_extent 00:08:01.689 Test: blob_open ...passed 00:08:01.689 Test: blob_create ...[2024-07-24 10:33:28.312600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:01.689 passed 00:08:01.949 Test: blob_create_loop ...passed 00:08:01.949 Test: blob_create_fail ...[2024-07-24 10:33:28.465326] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:01.949 passed 00:08:01.949 Test: blob_create_internal ...passed 00:08:01.949 Test: blob_create_zero_extent ...passed 00:08:02.207 Test: blob_snapshot ...passed 00:08:02.207 Test: blob_clone ...passed 00:08:02.207 Test: blob_inflate ...[2024-07-24 10:33:28.765838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:02.207 passed 00:08:02.207 Test: blob_delete ...passed 00:08:02.207 Test: blob_resize_test ...[2024-07-24 10:33:28.883677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:02.466 passed 00:08:02.466 Test: channel_ops ...passed 00:08:02.466 Test: blob_super ...passed 00:08:02.466 Test: blob_rw_verify_iov ...passed 00:08:02.466 Test: blob_unmap ...passed 00:08:02.725 Test: blob_iter ...passed 00:08:02.725 Test: blob_parse_md ...passed 00:08:02.725 Test: bs_load_pending_removal ...passed 00:08:02.725 Test: bs_unload ...[2024-07-24 10:33:29.343923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:02.725 passed 00:08:02.983 Test: bs_usable_clusters ...passed 00:08:02.983 Test: blob_crc ...[2024-07-24 10:33:29.459834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:02.983 [2024-07-24 10:33:29.460021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:02.983 passed 00:08:02.983 Test: blob_flags ...passed 00:08:02.983 Test: bs_version ...passed 00:08:02.983 Test: blob_set_xattrs_test ...[2024-07-24 10:33:29.640949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:02.983 [2024-07-24 10:33:29.641106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:02.983 passed 00:08:03.241 Test: blob_thin_prov_alloc ...passed 00:08:03.241 Test: blob_insert_cluster_msg_test ...passed 00:08:03.499 Test: blob_thin_prov_rw ...passed 00:08:03.499 Test: blob_thin_prov_rle ...passed 00:08:03.499 Test: blob_thin_prov_rw_iov ...passed 00:08:03.499 Test: blob_snapshot_rw ...passed 00:08:03.499 Test: blob_snapshot_rw_iov ...passed 00:08:04.066 Test: blob_inflate_rw ...passed 00:08:04.066 Test: blob_snapshot_freeze_io ...passed 00:08:04.066 Test: blob_operation_split_rw ...passed 00:08:04.324 Test: blob_operation_split_rw_iov ...passed 00:08:04.324 Test: blob_simultaneous_operations ...[2024-07-24 10:33:30.934852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:04.324 [2024-07-24 
10:33:30.935025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.324 [2024-07-24 10:33:30.935744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:04.324 [2024-07-24 10:33:30.935794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.324 [2024-07-24 10:33:30.939719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:04.324 [2024-07-24 10:33:30.939791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.324 [2024-07-24 10:33:30.939941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:04.324 [2024-07-24 10:33:30.939991] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:04.324 passed 00:08:04.583 Test: blob_persist_test ...passed 00:08:04.583 Test: blob_decouple_snapshot ...passed 00:08:04.583 Test: blob_seek_io_unit ...passed 00:08:04.583 Test: blob_nested_freezes ...passed 00:08:04.583 Suite: blob_blob_copy_extent 00:08:04.840 Test: blob_write ...passed 00:08:04.840 Test: blob_read ...passed 00:08:04.840 Test: blob_rw_verify ...passed 00:08:04.840 Test: blob_rw_verify_iov_nomem ...passed 00:08:05.098 Test: blob_rw_iov_read_only ...passed 00:08:05.098 Test: blob_xattr ...passed 00:08:05.098 Test: blob_dirty_shutdown ...passed 00:08:05.098 Test: blob_is_degraded ...passed 00:08:05.098 Suite: blob_esnap_bs_copy_extent 00:08:05.372 Test: blob_esnap_create ...passed 00:08:05.372 Test: blob_esnap_thread_add_remove ...passed 00:08:05.372 Test: blob_esnap_clone_snapshot ...passed 00:08:05.372 Test: blob_esnap_clone_inflate ...passed 00:08:05.372 Test: blob_esnap_clone_decouple ...passed 00:08:05.630 Test: blob_esnap_clone_reload ...passed 00:08:05.630 Test: blob_esnap_hotplug ...passed 00:08:05.630 00:08:05.630 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.630 suites 16 16 n/a 0 0 00:08:05.630 tests 348 348 348 0 0 00:08:05.630 asserts 92605 92605 92605 0 n/a 00:08:05.630 00:08:05.630 Elapsed time = 19.259 seconds 00:08:05.630 10:33:32 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:05.630 00:08:05.630 00:08:05.630 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.630 http://cunit.sourceforge.net/ 00:08:05.630 00:08:05.630 00:08:05.630 Suite: blob_bdev 00:08:05.630 Test: create_bs_dev ...passed 00:08:05.630 Test: create_bs_dev_ro ...[2024-07-24 10:33:32.192485] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:05.630 passed 00:08:05.630 Test: create_bs_dev_rw ...passed 00:08:05.630 Test: claim_bs_dev ...[2024-07-24 10:33:32.193018] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:05.630 passed 00:08:05.630 Test: claim_bs_dev_ro ...passed 00:08:05.630 Test: deferred_destroy_refs ...passed 00:08:05.630 Test: deferred_destroy_channels ...passed 00:08:05.630 Test: deferred_destroy_threads ...passed 00:08:05.630 00:08:05.630 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.630 suites 1 1 n/a 0 0 00:08:05.630 tests 8 8 8 0 0 00:08:05.630 
asserts 119 119 119 0 n/a 00:08:05.630 00:08:05.630 Elapsed time = 0.001 seconds 00:08:05.630 10:33:32 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:05.630 00:08:05.630 00:08:05.630 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.630 http://cunit.sourceforge.net/ 00:08:05.630 00:08:05.630 00:08:05.630 Suite: tree 00:08:05.630 Test: blobfs_tree_op_test ...passed 00:08:05.630 00:08:05.630 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.630 suites 1 1 n/a 0 0 00:08:05.630 tests 1 1 1 0 0 00:08:05.630 asserts 27 27 27 0 n/a 00:08:05.630 00:08:05.630 Elapsed time = 0.000 seconds 00:08:05.630 10:33:32 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:05.630 00:08:05.630 00:08:05.630 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.630 http://cunit.sourceforge.net/ 00:08:05.630 00:08:05.630 00:08:05.630 Suite: blobfs_async_ut 00:08:05.888 Test: fs_init ...passed 00:08:05.888 Test: fs_open ...passed 00:08:05.888 Test: fs_create ...passed 00:08:05.888 Test: fs_truncate ...passed 00:08:05.888 Test: fs_rename ...[2024-07-24 10:33:32.430157] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:05.888 passed 00:08:05.888 Test: fs_rw_async ...passed 00:08:05.888 Test: fs_writev_readv_async ...passed 00:08:05.888 Test: tree_find_buffer_ut ...passed 00:08:05.888 Test: channel_ops ...passed 00:08:05.888 Test: channel_ops_sync ...passed 00:08:05.888 00:08:05.888 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.888 suites 1 1 n/a 0 0 00:08:05.888 tests 10 10 10 0 0 00:08:05.888 asserts 292 292 292 0 n/a 00:08:05.888 00:08:05.888 Elapsed time = 0.212 seconds 00:08:05.888 10:33:32 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:05.888 00:08:05.888 00:08:05.888 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.888 http://cunit.sourceforge.net/ 00:08:05.888 00:08:05.888 00:08:05.888 Suite: blobfs_sync_ut 00:08:06.147 Test: cache_read_after_write ...[2024-07-24 10:33:32.625669] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:06.147 passed 00:08:06.147 Test: file_length ...passed 00:08:06.147 Test: append_write_to_extend_blob ...passed 00:08:06.147 Test: partial_buffer ...passed 00:08:06.147 Test: cache_write_null_buffer ...passed 00:08:06.147 Test: fs_create_sync ...passed 00:08:06.147 Test: fs_rename_sync ...passed 00:08:06.147 Test: cache_append_no_cache ...passed 00:08:06.147 Test: fs_delete_file_without_close ...passed 00:08:06.147 00:08:06.147 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.147 suites 1 1 n/a 0 0 00:08:06.147 tests 9 9 9 0 0 00:08:06.147 asserts 345 345 345 0 n/a 00:08:06.147 00:08:06.147 Elapsed time = 0.486 seconds 00:08:06.407 10:33:32 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:06.407 00:08:06.407 00:08:06.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.407 http://cunit.sourceforge.net/ 00:08:06.407 00:08:06.407 00:08:06.407 Suite: blobfs_bdev_ut 00:08:06.407 Test: spdk_blobfs_bdev_detect_test ...[2024-07-24 10:33:32.865564] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:08:06.407 passed 00:08:06.407 Test: spdk_blobfs_bdev_create_test ...passed 00:08:06.407 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:06.407 00:08:06.407 [2024-07-24 10:33:32.865920] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:06.407 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.407 suites 1 1 n/a 0 0 00:08:06.407 tests 3 3 3 0 0 00:08:06.407 asserts 9 9 9 0 n/a 00:08:06.407 00:08:06.407 Elapsed time = 0.001 seconds 00:08:06.407 00:08:06.407 real 0m20.079s 00:08:06.407 user 0m19.525s 00:08:06.407 sys 0m0.807s 00:08:06.407 10:33:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.407 10:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:06.407 ************************************ 00:08:06.407 END TEST unittest_blob_blobfs 00:08:06.407 ************************************ 00:08:06.407 10:33:32 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:08:06.407 10:33:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.407 10:33:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.407 10:33:32 -- common/autotest_common.sh@10 -- # set +x 00:08:06.407 ************************************ 00:08:06.407 START TEST unittest_event 00:08:06.407 ************************************ 00:08:06.407 10:33:32 -- common/autotest_common.sh@1104 -- # unittest_event 00:08:06.407 10:33:32 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:06.407 00:08:06.407 00:08:06.407 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.407 http://cunit.sourceforge.net/ 00:08:06.407 00:08:06.407 00:08:06.407 Suite: app_suite 00:08:06.407 Test: test_spdk_app_parse_args ...app_ut [options] 00:08:06.407 options: 00:08:06.407 -c, --config JSON config file (default none) 00:08:06.407 --json JSON config file (default none) 00:08:06.407 --json-ignore-init-errors 00:08:06.407 don't exit on invalid config entry 00:08:06.407 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:06.407 -g, --single-file-segments 00:08:06.407 force creating just one hugetlbfs file 00:08:06.407 -h, --help show this usage 00:08:06.408 -i, --shm-id shared memory ID (optional) 00:08:06.408 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:06.408 --lcores lcore to CPU mapping list. The list is in the format: 00:08:06.408 [<,lcores[@CPUs]>...] 00:08:06.408 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:06.408 Within the group, '-' is used for range separator, 00:08:06.408 ',' is used for single number separator. 00:08:06.408 '( )' can be omitted for single element group, 00:08:06.408 '@' can be omitted if cpus and lcores have the same value 00:08:06.408 -n, --mem-channels channel number of memory channels used for DPDK 00:08:06.408 -p, --main-core main (primary) core for DPDK 00:08:06.408 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:06.408 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:06.408 --disable-cpumask-locks Disable CPU core lock files. 
00:08:06.408 --silence-noticelog disable notice level logging to stderr 00:08:06.408 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:06.408 -u, --no-pci disable PCI access 00:08:06.408 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:06.408 --max-delay maximum reactor delay (in microseconds) 00:08:06.408 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:06.408 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:06.408 -R, --huge-unlink unlink huge files after initialization 00:08:06.408 -v, --version print SPDK version 00:08:06.408 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:06.408 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:06.408 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:06.408 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:06.408 Tracepoints vary in size and can use more than one trace entry. 00:08:06.408 --rpcs-allowed comma-separated list of permitted RPCS 00:08:06.408 --env-context Opaque context for use of the env implementation 00:08:06.408 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:06.408 --no-huge run without using hugepages 00:08:06.408 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:06.408 -e, --tpoint-group [:] 00:08:06.408 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:06.408 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:06.408 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:06.408 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:06.408 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:06.408 app_ut [options] 00:08:06.408 options:app_ut: invalid option -- 'z' 00:08:06.408 app_ut: unrecognized option '--test-long-opt' 00:08:06.408 00:08:06.408 -c, --config JSON config file (default none) 00:08:06.408 --json JSON config file (default none) 00:08:06.408 --json-ignore-init-errors 00:08:06.408 don't exit on invalid config entry 00:08:06.408 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:06.408 -g, --single-file-segments 00:08:06.408 force creating just one hugetlbfs file 00:08:06.408 -h, --help show this usage 00:08:06.408 -i, --shm-id shared memory ID (optional) 00:08:06.408 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:06.408 --lcores lcore to CPU mapping list. The list is in the format: 00:08:06.408 [<,lcores[@CPUs]>...] 00:08:06.408 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:06.408 Within the group, '-' is used for range separator, 00:08:06.408 ',' is used for single number separator. 
00:08:06.408 '( )' can be omitted for single element group, 00:08:06.408 '@' can be omitted if cpus and lcores have the same value 00:08:06.408 -n, --mem-channels channel number of memory channels used for DPDK 00:08:06.408 -p, --main-core main (primary) core for DPDK 00:08:06.408 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:06.408 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:06.408 --disable-cpumask-locks Disable CPU core lock files. 00:08:06.408 --silence-noticelog disable notice level logging to stderr 00:08:06.408 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:06.408 -u, --no-pci disable PCI access 00:08:06.408 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:06.408 --max-delay maximum reactor delay (in microseconds) 00:08:06.408 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:06.408 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:06.408 -R, --huge-unlink unlink huge files after initialization 00:08:06.408 -v, --version print SPDK version 00:08:06.408 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:06.408 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:06.408 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:06.408 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:06.408 Tracepoints vary in size and can use more than one trace entry. 00:08:06.408 --rpcs-allowed comma-separated list of permitted RPCS 00:08:06.408 --env-context Opaque context for use of the env implementation 00:08:06.408 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:06.408 --no-huge run without using hugepages 00:08:06.408 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:06.408 -e, --tpoint-group [:] 00:08:06.408 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:06.408 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:06.408 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:08:06.408 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:06.408 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:06.408 [2024-07-24 10:33:32.944883] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
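The app_ut output above ("invalid option -- 'z'", "unrecognized option '--test-long-opt'", the repeated usage dump, and the "Duplicated option 'c'" error) comes from feeding bad option sets into spdk_app_parse_args(). A hedged sketch of how an application normally registers an extra option next to the generic ones, assuming a recent SPDK where spdk_app_opts_init() takes the struct size; the option letter 'x', the callback names, and the app name are illustrative only.

#include "spdk/stdinc.h"
#include "spdk/event.h"
#include "spdk/log.h"

static int
my_parse_arg(int ch, char *arg)
{
    switch (ch) {
    case 'x':    /* must not reuse a generic letter such as 'c', or parse_args reports a duplicate */
        /* consume the app-specific value in arg */
        return 0;
    default:
        return -EINVAL;
    }
}

static void
my_usage(void)
{
    printf(" -x <value>                example app-specific option\n");
}

static void
app_start(void *ctx)
{
    SPDK_NOTICELOG("started\n");
    spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts;
    int rc;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "parse_args_example";

    if (spdk_app_parse_args(argc, argv, &opts, "x:", NULL,
                            my_parse_arg, my_usage) != SPDK_APP_PARSE_ARGS_SUCCESS) {
        return 1;
    }

    rc = spdk_app_start(&opts, app_start, NULL);
    spdk_app_fini();
    return rc;
}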
00:08:06.408 [2024-07-24 10:33:32.945263] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:06.408 app_ut [options] 00:08:06.408 options: 00:08:06.408 -c, --config JSON config file (default none) 00:08:06.408 --json JSON config file (default none) 00:08:06.408 --json-ignore-init-errors 00:08:06.408 don't exit on invalid config entry 00:08:06.408 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:06.408 -g, --single-file-segments 00:08:06.408 force creating just one hugetlbfs file 00:08:06.408 -h, --help show this usage 00:08:06.408 -i, --shm-id shared memory ID (optional) 00:08:06.408 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:06.408 --lcores lcore to CPU mapping list. The list is in the format: 00:08:06.408 [<,lcores[@CPUs]>...] 00:08:06.408 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:06.408 Within the group, '-' is used for range separator, 00:08:06.408 ',' is used for single number separator. 00:08:06.408 '( )' can be omitted for single element group, 00:08:06.408 '@' can be omitted if cpus and lcores have the same value 00:08:06.408 -n, --mem-channels channel number of memory channels used for DPDK 00:08:06.408 -p, --main-core main (primary) core for DPDK 00:08:06.408 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:06.408 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:06.408 --disable-cpumask-locks Disable CPU core lock files. 00:08:06.408 --silence-noticelog disable notice level logging to stderr 00:08:06.408 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:06.408 -u, --no-pci disable PCI access 00:08:06.408 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:06.408 --max-delay maximum reactor delay (in microseconds) 00:08:06.408 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:06.408 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:06.408 -R, --huge-unlink unlink huge files after initialization 00:08:06.408 -v, --version print SPDK version 00:08:06.408 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:06.408 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:06.408 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:06.408 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:06.408 Tracepoints vary in size and can use more than one trace entry. 00:08:06.408 --rpcs-allowed comma-separated list of permitted RPCS 00:08:06.408 --env-context Opaque context for use of the env implementation 00:08:06.408 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:06.409 --no-huge run without using hugepages 00:08:06.409 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:08:06.409 -e, --tpoint-group [:] 00:08:06.409 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:08:06.409 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:06.409 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:08:06.409 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:06.409 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:06.409 passed 00:08:06.409 00:08:06.409 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.409 suites 1 1 n/a 0 0 00:08:06.409 tests 1 1 1 0 0 00:08:06.409 asserts 8 8 8 0 n/a 00:08:06.409 00:08:06.409 Elapsed time = 0.001 seconds 00:08:06.409 [2024-07-24 10:33:32.945497] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:06.409 10:33:32 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:06.409 00:08:06.409 00:08:06.409 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.409 http://cunit.sourceforge.net/ 00:08:06.409 00:08:06.409 00:08:06.409 Suite: app_suite 00:08:06.409 Test: test_create_reactor ...passed 00:08:06.409 Test: test_init_reactors ...passed 00:08:06.409 Test: test_event_call ...passed 00:08:06.409 Test: test_schedule_thread ...passed 00:08:06.409 Test: test_reschedule_thread ...passed 00:08:06.409 Test: test_bind_thread ...passed 00:08:06.409 Test: test_for_each_reactor ...passed 00:08:06.409 Test: test_reactor_stats ...passed 00:08:06.409 Test: test_scheduler ...passed 00:08:06.409 Test: test_governor ...passed 00:08:06.409 00:08:06.409 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.409 suites 1 1 n/a 0 0 00:08:06.409 tests 10 10 10 0 0 00:08:06.409 asserts 344 344 344 0 n/a 00:08:06.409 00:08:06.409 Elapsed time = 0.016 seconds 00:08:06.409 00:08:06.409 real 0m0.084s 00:08:06.409 user 0m0.067s 00:08:06.409 sys 0m0.017s 00:08:06.409 10:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.409 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:06.409 ************************************ 00:08:06.409 END TEST unittest_event 00:08:06.409 ************************************ 00:08:06.409 10:33:33 -- unit/unittest.sh@233 -- # uname -s 00:08:06.409 10:33:33 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:08:06.409 10:33:33 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:08:06.409 10:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.409 10:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.409 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:06.409 ************************************ 00:08:06.409 START TEST unittest_ftl 00:08:06.409 ************************************ 00:08:06.409 10:33:33 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:08:06.409 10:33:33 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:06.409 00:08:06.409 00:08:06.409 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.409 http://cunit.sourceforge.net/ 00:08:06.409 00:08:06.409 00:08:06.409 Suite: ftl_band_suite 00:08:06.668 Test: test_band_block_offset_from_addr_base ...passed 00:08:06.668 Test: test_band_block_offset_from_addr_offset ...passed 00:08:06.668 Test: test_band_addr_from_block_offset ...passed 00:08:06.668 Test: test_band_set_addr ...passed 00:08:06.668 Test: test_invalidate_addr ...passed 00:08:06.668 Test: test_next_xfer_addr ...passed 00:08:06.668 00:08:06.668 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.668 suites 1 1 n/a 0 0 00:08:06.668 tests 6 6 6 0 0 00:08:06.668 asserts 30356 30356 30356 0 n/a 00:08:06.668 
00:08:06.668 Elapsed time = 0.181 seconds 00:08:06.668 10:33:33 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:06.668 00:08:06.668 00:08:06.668 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.668 http://cunit.sourceforge.net/ 00:08:06.668 00:08:06.668 00:08:06.668 Suite: ftl_bitmap 00:08:06.668 Test: test_ftl_bitmap_create ...[2024-07-24 10:33:33.336415] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:06.668 [2024-07-24 10:33:33.336839] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:06.668 passed 00:08:06.668 Test: test_ftl_bitmap_get ...passed 00:08:06.668 Test: test_ftl_bitmap_set ...passed 00:08:06.668 Test: test_ftl_bitmap_clear ...passed 00:08:06.668 Test: test_ftl_bitmap_find_first_set ...passed 00:08:06.668 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:06.668 Test: test_ftl_bitmap_count_set ...passed 00:08:06.668 00:08:06.668 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.668 suites 1 1 n/a 0 0 00:08:06.668 tests 7 7 7 0 0 00:08:06.668 asserts 137 137 137 0 n/a 00:08:06.668 00:08:06.668 Elapsed time = 0.001 seconds 00:08:06.950 10:33:33 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:06.951 00:08:06.951 00:08:06.951 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.951 http://cunit.sourceforge.net/ 00:08:06.951 00:08:06.951 00:08:06.951 Suite: ftl_io_suite 00:08:06.951 Test: test_completion ...passed 00:08:06.951 Test: test_multiple_ios ...passed 00:08:06.951 00:08:06.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.951 suites 1 1 n/a 0 0 00:08:06.951 tests 2 2 2 0 0 00:08:06.951 asserts 47 47 47 0 n/a 00:08:06.951 00:08:06.951 Elapsed time = 0.003 seconds 00:08:06.951 10:33:33 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:06.951 00:08:06.951 00:08:06.951 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.951 http://cunit.sourceforge.net/ 00:08:06.951 00:08:06.951 00:08:06.951 Suite: ftl_mngt 00:08:06.951 Test: test_next_step ...passed 00:08:06.951 Test: test_continue_step ...passed 00:08:06.951 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:06.951 Test: test_fail_step ...passed 00:08:06.951 Test: test_mngt_call_and_call_rollback ...passed 00:08:06.951 Test: test_nested_process_failure ...passed 00:08:06.951 00:08:06.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.951 suites 1 1 n/a 0 0 00:08:06.951 tests 6 6 6 0 0 00:08:06.951 asserts 176 176 176 0 n/a 00:08:06.951 00:08:06.951 Elapsed time = 0.001 seconds 00:08:06.951 10:33:33 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:06.951 00:08:06.951 00:08:06.951 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.951 http://cunit.sourceforge.net/ 00:08:06.951 00:08:06.951 00:08:06.951 Suite: ftl_mempool 00:08:06.951 Test: test_ftl_mempool_create ...passed 00:08:06.951 Test: test_ftl_mempool_get_put ...passed 00:08:06.951 00:08:06.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.951 suites 1 1 n/a 0 0 00:08:06.951 tests 2 2 2 0 0 00:08:06.951 asserts 36 36 36 0 n/a 00:08:06.951 00:08:06.951 Elapsed time = 0.000 seconds 00:08:06.951 10:33:33 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:06.951 00:08:06.951 00:08:06.951 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.951 http://cunit.sourceforge.net/ 00:08:06.951 00:08:06.951 00:08:06.951 Suite: ftl_addr64_suite 00:08:06.951 Test: test_addr_cached ...passed 00:08:06.951 00:08:06.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.951 suites 1 1 n/a 0 0 00:08:06.951 tests 1 1 1 0 0 00:08:06.951 asserts 1536 1536 1536 0 n/a 00:08:06.951 00:08:06.951 Elapsed time = 0.000 seconds 00:08:06.951 10:33:33 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:06.951 00:08:06.951 00:08:06.951 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.951 http://cunit.sourceforge.net/ 00:08:06.951 00:08:06.951 00:08:06.951 Suite: ftl_sb 00:08:06.951 Test: test_sb_crc_v2 ...passed 00:08:06.951 Test: test_sb_crc_v3 ...passed 00:08:06.951 Test: test_sb_v3_md_layout ...[2024-07-24 10:33:33.480381] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:06.951 [2024-07-24 10:33:33.480919] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:06.951 [2024-07-24 10:33:33.481012] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:06.951 [2024-07-24 10:33:33.481075] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:06.951 [2024-07-24 10:33:33.481126] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:06.951 [2024-07-24 10:33:33.481253] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:06.951 [2024-07-24 10:33:33.481307] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:06.951 [2024-07-24 10:33:33.481387] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:06.951 [2024-07-24 10:33:33.481495] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:06.951 [2024-07-24 10:33:33.481573] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:06.951 passed 00:08:06.951 Test: test_sb_v5_md_layout ...[2024-07-24 10:33:33.481641] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:06.951 passed 00:08:06.951 00:08:06.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.951 suites 1 1 n/a 0 0 00:08:06.951 tests 4 4 4 0 0 00:08:06.951 asserts 148 148 148 0 n/a 00:08:06.951 00:08:06.951 Elapsed time = 0.003 seconds 00:08:06.951 10:33:33 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:06.951 00:08:06.951 00:08:06.951 CUnit - A unit testing framework 
for C - Version 2.1-3 00:08:06.951 http://cunit.sourceforge.net/ 00:08:06.951 00:08:06.951 00:08:06.951 Suite: ftl_layout_upgrade 00:08:06.951 Test: test_l2p_upgrade ...passed 00:08:06.951 00:08:06.951 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.951 suites 1 1 n/a 0 0 00:08:06.951 tests 1 1 1 0 0 00:08:06.951 asserts 140 140 140 0 n/a 00:08:06.951 00:08:06.951 Elapsed time = 0.001 seconds 00:08:06.951 00:08:06.951 real 0m0.458s 00:08:06.951 user 0m0.202s 00:08:06.951 sys 0m0.259s 00:08:06.951 10:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.951 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:06.951 ************************************ 00:08:06.951 END TEST unittest_ftl 00:08:06.951 ************************************ 00:08:06.951 10:33:33 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:06.951 10:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.951 10:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.951 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:06.951 ************************************ 00:08:06.951 START TEST unittest_accel 00:08:06.951 ************************************ 00:08:06.951 10:33:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:06.951 00:08:06.951 00:08:06.951 CUnit - A unit testing framework for C - Version 2.1-3 00:08:06.951 http://cunit.sourceforge.net/ 00:08:06.951 00:08:06.951 00:08:06.951 Suite: accel_sequence 00:08:06.951 Test: test_sequence_fill_copy ...passed 00:08:06.951 Test: test_sequence_abort ...passed 00:08:06.951 Test: test_sequence_append_error ...passed 00:08:06.951 Test: test_sequence_completion_error ...[2024-07-24 10:33:33.588985] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f1af6f9c7c0 00:08:06.951 [2024-07-24 10:33:33.589423] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f1af6f9c7c0 00:08:06.951 [2024-07-24 10:33:33.589508] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f1af6f9c7c0 00:08:06.951 [2024-07-24 10:33:33.589597] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f1af6f9c7c0 00:08:06.951 passed 00:08:06.951 Test: test_sequence_decompress ...passed 00:08:06.951 Test: test_sequence_reverse ...passed 00:08:06.952 Test: test_sequence_copy_elision ...passed 00:08:06.952 Test: test_sequence_accel_buffers ...passed 00:08:06.952 Test: test_sequence_memory_domain ...[2024-07-24 10:33:33.601059] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:06.952 [2024-07-24 10:33:33.601240] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:06.952 passed 00:08:06.952 Test: test_sequence_module_memory_domain ...passed 00:08:06.952 Test: test_sequence_crypto ...passed 00:08:06.952 Test: test_sequence_driver ...[2024-07-24 10:33:33.608749] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f1af63747c0 using driver: ut 00:08:06.952 
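The Suite/Test/Run Summary blocks that repeat throughout this output are produced by CUnit, the framework named in each banner. For reference, this is roughly the registration pattern a test binary such as reactor_ut or accel_ut follows, assuming stock CUnit 2.1-3; the suite and test names below are made up, not taken from the SPDK sources.

#include <CUnit/Basic.h>

static void
test_example(void)
{
    CU_ASSERT(1 + 1 == 2);    /* each CU_ASSERT feeds the "asserts" column */
}

int
main(void)
{
    CU_pSuite suite;
    unsigned int num_failures;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example_suite", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();                /* prints the Suite/Test/Run Summary blocks */
    num_failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return num_failures;
}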
[2024-07-24 10:33:33.608903] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f1af63747c0 through driver: ut 00:08:06.952 passed 00:08:06.952 Test: test_sequence_same_iovs ...passed 00:08:06.952 Test: test_sequence_crc32 ...passed 00:08:06.952 Suite: accel 00:08:06.952 Test: test_spdk_accel_task_complete ...passed 00:08:06.952 Test: test_get_task ...passed 00:08:06.952 Test: test_spdk_accel_submit_copy ...passed 00:08:06.952 Test: test_spdk_accel_submit_dualcast ...[2024-07-24 10:33:33.614467] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:06.952 passed 00:08:06.952 Test: test_spdk_accel_submit_compare ...[2024-07-24 10:33:33.614560] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:06.952 passed 00:08:06.952 Test: test_spdk_accel_submit_fill ...passed 00:08:06.952 Test: test_spdk_accel_submit_crc32c ...passed 00:08:06.952 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:06.952 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:06.952 Test: test_spdk_accel_submit_xor ...passed 00:08:06.952 Test: test_spdk_accel_module_find_by_name ...passed 00:08:06.952 Test: test_spdk_accel_module_register ...passed 00:08:06.952 00:08:06.952 Run Summary: Type Total Ran Passed Failed Inactive 00:08:06.952 suites 2 2 n/a 0 0 00:08:06.952 tests 26 26 26 0 0 00:08:06.952 asserts 831 831 831 0 n/a 00:08:06.952 00:08:06.952 Elapsed time = 0.038 seconds 00:08:07.210 00:08:07.210 real 0m0.074s 00:08:07.210 user 0m0.039s 00:08:07.210 sys 0m0.035s 00:08:07.210 10:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.210 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 ************************************ 00:08:07.210 END TEST unittest_accel 00:08:07.210 ************************************ 00:08:07.210 10:33:33 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:07.210 10:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.210 10:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.210 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 ************************************ 00:08:07.210 START TEST unittest_ioat 00:08:07.210 ************************************ 00:08:07.210 10:33:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:07.210 00:08:07.210 00:08:07.210 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.210 http://cunit.sourceforge.net/ 00:08:07.210 00:08:07.210 00:08:07.210 Suite: ioat 00:08:07.210 Test: ioat_state_check ...passed 00:08:07.210 00:08:07.210 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.210 suites 1 1 n/a 0 0 00:08:07.210 tests 1 1 1 0 0 00:08:07.210 asserts 32 32 32 0 n/a 00:08:07.210 00:08:07.210 Elapsed time = 0.000 seconds 00:08:07.210 00:08:07.210 real 0m0.029s 00:08:07.210 user 0m0.008s 00:08:07.210 sys 0m0.020s 00:08:07.210 10:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.210 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 ************************************ 00:08:07.210 END TEST unittest_ioat 00:08:07.210 ************************************ 00:08:07.210 10:33:33 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:07.210 10:33:33 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:07.210 10:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.210 10:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.210 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 ************************************ 00:08:07.210 START TEST unittest_idxd_user 00:08:07.210 ************************************ 00:08:07.210 10:33:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:07.210 00:08:07.210 00:08:07.210 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.210 http://cunit.sourceforge.net/ 00:08:07.210 00:08:07.210 00:08:07.210 Suite: idxd_user 00:08:07.210 Test: test_idxd_wait_cmd ...[2024-07-24 10:33:33.777149] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:07.210 passed 00:08:07.210 Test: test_idxd_reset_dev ...[2024-07-24 10:33:33.777432] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:07.210 [2024-07-24 10:33:33.777557] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:07.210 passed 00:08:07.210 Test: test_idxd_group_config ...passed 00:08:07.210 Test: test_idxd_wq_config ...passed 00:08:07.210 00:08:07.210 [2024-07-24 10:33:33.777600] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:07.210 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.210 suites 1 1 n/a 0 0 00:08:07.210 tests 4 4 4 0 0 00:08:07.210 asserts 20 20 20 0 n/a 00:08:07.210 00:08:07.210 Elapsed time = 0.001 seconds 00:08:07.210 00:08:07.210 real 0m0.033s 00:08:07.210 user 0m0.021s 00:08:07.210 sys 0m0.012s 00:08:07.210 10:33:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.210 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 ************************************ 00:08:07.210 END TEST unittest_idxd_user 00:08:07.210 ************************************ 00:08:07.210 10:33:33 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:08:07.210 10:33:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.210 10:33:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.210 10:33:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 ************************************ 00:08:07.210 START TEST unittest_iscsi 00:08:07.210 ************************************ 00:08:07.210 10:33:33 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:08:07.210 10:33:33 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:07.211 00:08:07.211 00:08:07.211 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.211 http://cunit.sourceforge.net/ 00:08:07.211 00:08:07.211 00:08:07.211 Suite: conn_suite 00:08:07.211 Test: read_task_split_in_order_case ...passed 00:08:07.211 Test: read_task_split_reverse_order_case ...passed 00:08:07.211 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:07.211 Test: process_non_read_task_completion_test ...passed 00:08:07.211 Test: free_tasks_on_connection ...passed 00:08:07.211 Test: free_tasks_with_queued_datain ...passed 00:08:07.211 Test: 
abort_queued_datain_task_test ...passed 00:08:07.211 Test: abort_queued_datain_tasks_test ...passed 00:08:07.211 00:08:07.211 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.211 suites 1 1 n/a 0 0 00:08:07.211 tests 8 8 8 0 0 00:08:07.211 asserts 230 230 230 0 n/a 00:08:07.211 00:08:07.211 Elapsed time = 0.000 seconds 00:08:07.211 10:33:33 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:07.470 00:08:07.470 00:08:07.470 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.470 http://cunit.sourceforge.net/ 00:08:07.470 00:08:07.470 00:08:07.470 Suite: iscsi_suite 00:08:07.470 Test: param_negotiation_test ...passed 00:08:07.470 Test: list_negotiation_test ...passed 00:08:07.470 Test: parse_valid_test ...passed 00:08:07.470 Test: parse_invalid_test ...[2024-07-24 10:33:33.897668] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:07.470 [2024-07-24 10:33:33.898032] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:08:07.470 [2024-07-24 10:33:33.898106] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:08:07.470 [2024-07-24 10:33:33.898181] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:07.470 [2024-07-24 10:33:33.898332] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:07.470 [2024-07-24 10:33:33.898421] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:08:07.470 [2024-07-24 10:33:33.898559] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:07.470 passed 00:08:07.470 00:08:07.470 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.470 suites 1 1 n/a 0 0 00:08:07.470 tests 4 4 4 0 0 00:08:07.470 asserts 161 161 161 0 n/a 00:08:07.470 00:08:07.470 Elapsed time = 0.005 seconds 00:08:07.470 10:33:33 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:07.470 00:08:07.470 00:08:07.470 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.470 http://cunit.sourceforge.net/ 00:08:07.470 00:08:07.470 00:08:07.470 Suite: iscsi_target_node_suite 00:08:07.470 Test: add_lun_test_cases ...[2024-07-24 10:33:33.931259] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:07.470 [2024-07-24 10:33:33.931752] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:07.470 [2024-07-24 10:33:33.931863] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:07.470 [2024-07-24 10:33:33.931920] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:07.470 [2024-07-24 10:33:33.931959] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:07.470 passed 00:08:07.470 Test: allow_any_allowed ...passed 00:08:07.470 Test: allow_ipv6_allowed ...passed 00:08:07.470 Test: allow_ipv6_denied ...passed 00:08:07.470 Test: allow_ipv6_invalid ...passed 00:08:07.470 Test: allow_ipv4_allowed ...passed 00:08:07.470 Test: allow_ipv4_denied ...passed 00:08:07.470 Test: allow_ipv4_invalid 
...passed 00:08:07.470 Test: node_access_allowed ...passed 00:08:07.470 Test: node_access_denied_by_empty_netmask ...passed 00:08:07.470 Test: node_access_multi_initiator_groups_cases ...passed 00:08:07.470 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:07.470 Test: chap_param_test_cases ...[2024-07-24 10:33:33.932412] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:07.470 [2024-07-24 10:33:33.932472] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:07.470 [2024-07-24 10:33:33.932538] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:07.471 [2024-07-24 10:33:33.932595] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:07.471 passed 00:08:07.471 00:08:07.471 [2024-07-24 10:33:33.932647] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:07.471 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.471 suites 1 1 n/a 0 0 00:08:07.471 tests 13 13 13 0 0 00:08:07.471 asserts 50 50 50 0 n/a 00:08:07.471 00:08:07.471 Elapsed time = 0.001 seconds 00:08:07.471 10:33:33 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:07.471 00:08:07.471 00:08:07.471 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.471 http://cunit.sourceforge.net/ 00:08:07.471 00:08:07.471 00:08:07.471 Suite: iscsi_suite 00:08:07.471 Test: op_login_check_target_test ...[2024-07-24 10:33:33.969335] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:08:07.471 passed 00:08:07.471 Test: op_login_session_normal_test ...[2024-07-24 10:33:33.969762] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:07.471 [2024-07-24 10:33:33.969824] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:07.471 [2024-07-24 10:33:33.969874] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:07.471 [2024-07-24 10:33:33.969927] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:07.471 [2024-07-24 10:33:33.970038] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:07.471 [2024-07-24 10:33:33.970154] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:07.471 [2024-07-24 10:33:33.970218] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:07.471 passed 00:08:07.471 Test: maxburstlength_test ...[2024-07-24 10:33:33.970508] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:07.471 [2024-07-24 10:33:33.970581] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:08:07.471 passed 00:08:07.471 Test: underflow_for_read_transfer_test ...passed 00:08:07.471 Test: underflow_for_zero_read_transfer_test ...passed 00:08:07.471 Test: underflow_for_request_sense_test ...passed 00:08:07.471 Test: underflow_for_check_condition_test ...passed 00:08:07.471 Test: add_transfer_task_test ...passed 00:08:07.471 Test: get_transfer_task_test ...passed 00:08:07.471 Test: del_transfer_task_test ...passed 00:08:07.471 Test: clear_all_transfer_tasks_test ...passed 00:08:07.471 Test: build_iovs_test ...passed 00:08:07.471 Test: build_iovs_with_md_test ...passed 00:08:07.471 Test: pdu_hdr_op_login_test ...[2024-07-24 10:33:33.972192] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:07.471 [2024-07-24 10:33:33.972322] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:07.471 [2024-07-24 10:33:33.972419] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:07.471 passed 00:08:07.471 Test: pdu_hdr_op_text_test ...[2024-07-24 10:33:33.972539] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:07.471 [2024-07-24 10:33:33.972656] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:07.471 [2024-07-24 10:33:33.972709] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:07.471 passed 00:08:07.471 Test: pdu_hdr_op_logout_test ...passed 00:08:07.471 Test: pdu_hdr_op_scsi_test ...[2024-07-24 10:33:33.972814] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
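A short aside on the "Received reserved NSG code: 2" error above: the CSG/NSG fields of an iSCSI Login PDU encode the negotiation stage, and per RFC 7143 the value 2 is the one encoding with no defined stage, so the login parser is expected to reject it. The identifier names below are illustrative only, not SPDK's own definitions.

enum login_stage {
    SECURITY_NEGOTIATION_PHASE    = 0,
    OPERATIONAL_NEGOTIATION_PHASE = 1,
    /* 2 is reserved and must be rejected, as the test above verifies */
    FULL_FEATURE_PHASE            = 3,
};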
00:08:07.471 [2024-07-24 10:33:33.972986] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:07.471 [2024-07-24 10:33:33.973044] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:07.471 [2024-07-24 10:33:33.973108] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:07.471 [2024-07-24 10:33:33.973233] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:07.471 [2024-07-24 10:33:33.973338] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:07.471 [2024-07-24 10:33:33.973525] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:07.471 passed 00:08:07.471 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-24 10:33:33.973646] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:07.471 [2024-07-24 10:33:33.973736] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:07.471 passed 00:08:07.471 Test: pdu_hdr_op_nopout_test ...[2024-07-24 10:33:33.973986] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:07.471 [2024-07-24 10:33:33.974081] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:07.471 [2024-07-24 10:33:33.974120] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:07.471 [2024-07-24 10:33:33.974158] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:07.471 passed 00:08:07.471 Test: pdu_hdr_op_data_test ...[2024-07-24 10:33:33.974201] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:07.471 [2024-07-24 10:33:33.974261] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:07.471 [2024-07-24 10:33:33.974342] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:07.471 [2024-07-24 10:33:33.974419] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:07.471 [2024-07-24 10:33:33.974498] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:07.471 [2024-07-24 10:33:33.974601] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:07.471 passed 00:08:07.471 Test: empty_text_with_cbit_test ...[2024-07-24 10:33:33.974652] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:07.471 passed 00:08:07.471 Test: pdu_payload_read_test ...[2024-07-24 10:33:33.976872] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:07.471 passed 00:08:07.471 Test: data_out_pdu_sequence_test ...passed 00:08:07.471 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:07.471 00:08:07.471 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.471 suites 1 1 n/a 0 0 00:08:07.471 tests 24 24 24 0 0 00:08:07.471 asserts 150253 150253 150253 0 n/a 00:08:07.471 00:08:07.471 Elapsed time = 0.018 seconds 00:08:07.471 10:33:34 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:07.471 00:08:07.471 00:08:07.471 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.471 http://cunit.sourceforge.net/ 00:08:07.471 00:08:07.471 00:08:07.471 Suite: init_grp_suite 00:08:07.471 Test: create_initiator_group_success_case ...passed 00:08:07.471 Test: find_initiator_group_success_case ...passed 00:08:07.471 Test: register_initiator_group_twice_case ...passed 00:08:07.471 Test: add_initiator_name_success_case ...passed 00:08:07.471 Test: add_initiator_name_fail_case ...passed 00:08:07.471 Test: delete_all_initiator_names_success_case ...[2024-07-24 10:33:34.014044] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:07.471 passed 00:08:07.471 Test: add_netmask_success_case ...passed 00:08:07.471 Test: add_netmask_fail_case ...[2024-07-24 10:33:34.014477] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:07.471 passed 00:08:07.471 Test: delete_all_netmasks_success_case ...passed 00:08:07.471 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:07.471 Test: netmask_overwrite_all_to_any_case ...passed 00:08:07.471 Test: add_delete_initiator_names_case ...passed 00:08:07.471 Test: add_duplicated_initiator_names_case ...passed 00:08:07.471 Test: delete_nonexisting_initiator_names_case ...passed 00:08:07.471 Test: add_delete_netmasks_case ...passed 00:08:07.471 Test: add_duplicated_netmasks_case ...passed 00:08:07.471 Test: delete_nonexisting_netmasks_case ...passed 00:08:07.471 00:08:07.471 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.471 suites 1 1 n/a 0 0 00:08:07.471 tests 17 17 17 0 0 00:08:07.471 asserts 108 108 108 0 n/a 00:08:07.471 00:08:07.471 Elapsed time = 0.001 seconds 00:08:07.471 10:33:34 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:07.471 00:08:07.471 00:08:07.471 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.471 http://cunit.sourceforge.net/ 00:08:07.471 00:08:07.471 00:08:07.471 Suite: portal_grp_suite 00:08:07.471 Test: portal_create_ipv4_normal_case ...passed 00:08:07.471 Test: portal_create_ipv6_normal_case ...passed 00:08:07.471 Test: portal_create_ipv4_wildcard_case ...passed 00:08:07.471 Test: portal_create_ipv6_wildcard_case ...passed 00:08:07.471 Test: portal_create_twice_case ...[2024-07-24 10:33:34.045866] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:07.472 passed 00:08:07.472 Test: portal_grp_register_unregister_case ...passed 00:08:07.472 Test: portal_grp_register_twice_case ...passed 00:08:07.472 Test: portal_grp_add_delete_case ...passed 00:08:07.472 Test: portal_grp_add_delete_twice_case ...passed 00:08:07.472 00:08:07.472 Run Summary: Type Total Ran 
Passed Failed Inactive 00:08:07.472 suites 1 1 n/a 0 0 00:08:07.472 tests 9 9 9 0 0 00:08:07.472 asserts 44 44 44 0 n/a 00:08:07.472 00:08:07.472 Elapsed time = 0.004 seconds 00:08:07.472 00:08:07.472 real 0m0.224s 00:08:07.472 user 0m0.118s 00:08:07.472 sys 0m0.109s 00:08:07.472 10:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.472 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.472 ************************************ 00:08:07.472 END TEST unittest_iscsi 00:08:07.472 ************************************ 00:08:07.472 10:33:34 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:08:07.472 10:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.472 10:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.472 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.472 ************************************ 00:08:07.472 START TEST unittest_json 00:08:07.472 ************************************ 00:08:07.472 10:33:34 -- common/autotest_common.sh@1104 -- # unittest_json 00:08:07.472 10:33:34 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:07.472 00:08:07.472 00:08:07.472 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.472 http://cunit.sourceforge.net/ 00:08:07.472 00:08:07.472 00:08:07.472 Suite: json 00:08:07.472 Test: test_parse_literal ...passed 00:08:07.472 Test: test_parse_string_simple ...passed 00:08:07.472 Test: test_parse_string_control_chars ...passed 00:08:07.472 Test: test_parse_string_utf8 ...passed 00:08:07.472 Test: test_parse_string_escapes_twochar ...passed 00:08:07.472 Test: test_parse_string_escapes_unicode ...passed 00:08:07.472 Test: test_parse_number ...passed 00:08:07.472 Test: test_parse_array ...passed 00:08:07.472 Test: test_parse_object ...passed 00:08:07.472 Test: test_parse_nesting ...passed 00:08:07.472 Test: test_parse_comment ...passed 00:08:07.472 00:08:07.472 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.472 suites 1 1 n/a 0 0 00:08:07.472 tests 11 11 11 0 0 00:08:07.472 asserts 1516 1516 1516 0 n/a 00:08:07.472 00:08:07.472 Elapsed time = 0.002 seconds 00:08:07.472 10:33:34 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:07.731 00:08:07.731 00:08:07.731 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.731 http://cunit.sourceforge.net/ 00:08:07.731 00:08:07.731 00:08:07.731 Suite: json 00:08:07.731 Test: test_strequal ...passed 00:08:07.731 Test: test_num_to_uint16 ...passed 00:08:07.731 Test: test_num_to_int32 ...passed 00:08:07.731 Test: test_num_to_uint64 ...passed 00:08:07.731 Test: test_decode_object ...passed 00:08:07.731 Test: test_decode_array ...passed 00:08:07.731 Test: test_decode_bool ...passed 00:08:07.731 Test: test_decode_uint16 ...passed 00:08:07.731 Test: test_decode_int32 ...passed 00:08:07.731 Test: test_decode_uint32 ...passed 00:08:07.731 Test: test_decode_uint64 ...passed 00:08:07.731 Test: test_decode_string ...passed 00:08:07.731 Test: test_decode_uuid ...passed 00:08:07.731 Test: test_find ...passed 00:08:07.731 Test: test_find_array ...passed 00:08:07.731 Test: test_iterating ...passed 00:08:07.731 Test: test_free_object ...passed 00:08:07.731 00:08:07.731 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.731 suites 1 1 n/a 0 0 00:08:07.731 tests 17 17 17 0 0 00:08:07.731 asserts 236 236 236 0 n/a 00:08:07.731 00:08:07.731 Elapsed time = 0.001 seconds 00:08:07.731 10:33:34 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:07.731 00:08:07.731 00:08:07.731 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.731 http://cunit.sourceforge.net/ 00:08:07.731 00:08:07.731 00:08:07.731 Suite: json 00:08:07.731 Test: test_write_literal ...passed 00:08:07.731 Test: test_write_string_simple ...passed 00:08:07.731 Test: test_write_string_escapes ...passed 00:08:07.731 Test: test_write_string_utf16le ...passed 00:08:07.731 Test: test_write_number_int32 ...passed 00:08:07.731 Test: test_write_number_uint32 ...passed 00:08:07.731 Test: test_write_number_uint128 ...passed 00:08:07.731 Test: test_write_string_number_uint128 ...passed 00:08:07.731 Test: test_write_number_int64 ...passed 00:08:07.731 Test: test_write_number_uint64 ...passed 00:08:07.731 Test: test_write_number_double ...passed 00:08:07.731 Test: test_write_uuid ...passed 00:08:07.731 Test: test_write_array ...passed 00:08:07.731 Test: test_write_object ...passed 00:08:07.731 Test: test_write_nesting ...passed 00:08:07.731 Test: test_write_val ...passed 00:08:07.731 00:08:07.731 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.731 suites 1 1 n/a 0 0 00:08:07.731 tests 16 16 16 0 0 00:08:07.731 asserts 918 918 918 0 n/a 00:08:07.731 00:08:07.731 Elapsed time = 0.006 seconds 00:08:07.731 10:33:34 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:07.731 00:08:07.731 00:08:07.731 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.731 http://cunit.sourceforge.net/ 00:08:07.731 00:08:07.731 00:08:07.731 Suite: jsonrpc 00:08:07.731 Test: test_parse_request ...passed 00:08:07.731 Test: test_parse_request_streaming ...passed 00:08:07.731 00:08:07.731 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.731 suites 1 1 n/a 0 0 00:08:07.731 tests 2 2 2 0 0 00:08:07.731 asserts 289 289 289 0 n/a 00:08:07.731 00:08:07.731 Elapsed time = 0.003 seconds 00:08:07.731 00:08:07.731 real 0m0.132s 00:08:07.731 user 0m0.080s 00:08:07.731 sys 0m0.053s 00:08:07.731 10:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.731 ************************************ 00:08:07.731 END TEST unittest_json 00:08:07.731 ************************************ 00:08:07.731 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.731 10:33:34 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:08:07.731 10:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.731 10:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.731 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.731 ************************************ 00:08:07.731 START TEST unittest_rpc 00:08:07.731 ************************************ 00:08:07.731 10:33:34 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:08:07.731 10:33:34 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:07.731 00:08:07.731 00:08:07.731 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.731 http://cunit.sourceforge.net/ 00:08:07.731 00:08:07.731 00:08:07.731 Suite: rpc 00:08:07.731 Test: test_jsonrpc_handler ...passed 00:08:07.731 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:07.731 Test: test_rpc_get_methods ...[2024-07-24 10:33:34.303615] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:07.731 passed 00:08:07.731 Test: test_rpc_spdk_get_version 
...passed 00:08:07.731 Test: test_spdk_rpc_listen_close ...passed 00:08:07.731 00:08:07.731 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.731 suites 1 1 n/a 0 0 00:08:07.731 tests 5 5 5 0 0 00:08:07.731 asserts 20 20 20 0 n/a 00:08:07.731 00:08:07.731 Elapsed time = 0.000 seconds 00:08:07.731 00:08:07.731 real 0m0.028s 00:08:07.731 user 0m0.016s 00:08:07.731 sys 0m0.012s 00:08:07.731 10:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.731 ************************************ 00:08:07.731 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.731 END TEST unittest_rpc 00:08:07.731 ************************************ 00:08:07.731 10:33:34 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:07.731 10:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.731 10:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.731 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.731 ************************************ 00:08:07.731 START TEST unittest_notify 00:08:07.731 ************************************ 00:08:07.731 10:33:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:07.731 00:08:07.731 00:08:07.731 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.731 http://cunit.sourceforge.net/ 00:08:07.731 00:08:07.732 00:08:07.732 Suite: app_suite 00:08:07.732 Test: notify ...passed 00:08:07.732 00:08:07.732 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.732 suites 1 1 n/a 0 0 00:08:07.732 tests 1 1 1 0 0 00:08:07.732 asserts 13 13 13 0 n/a 00:08:07.732 00:08:07.732 Elapsed time = 0.000 seconds 00:08:07.732 00:08:07.732 real 0m0.031s 00:08:07.732 user 0m0.018s 00:08:07.732 sys 0m0.013s 00:08:07.732 10:33:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.732 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.732 ************************************ 00:08:07.732 END TEST unittest_notify 00:08:07.732 ************************************ 00:08:07.991 10:33:34 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:08:07.991 10:33:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.991 10:33:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.991 10:33:34 -- common/autotest_common.sh@10 -- # set +x 00:08:07.991 ************************************ 00:08:07.991 START TEST unittest_nvme 00:08:07.991 ************************************ 00:08:07.991 10:33:34 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:08:07.991 10:33:34 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:07.991 00:08:07.991 00:08:07.991 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.991 http://cunit.sourceforge.net/ 00:08:07.991 00:08:07.991 00:08:07.991 Suite: nvme 00:08:07.991 Test: test_opc_data_transfer ...passed 00:08:07.991 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:07.991 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:07.991 Test: test_trid_parse_and_compare ...[2024-07-24 10:33:34.461751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:07.991 [2024-07-24 10:33:34.462960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:07.991 [2024-07-24 10:33:34.463129] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:07.991 [2024-07-24 10:33:34.463184] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:07.991 [2024-07-24 10:33:34.463550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:08:07.991 [2024-07-24 10:33:34.463976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:07.991 passed 00:08:07.991 Test: test_trid_trtype_str ...passed 00:08:07.991 Test: test_trid_adrfam_str ...passed 00:08:07.991 Test: test_nvme_ctrlr_probe ...[2024-07-24 10:33:34.464378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:07.991 passed 00:08:07.991 Test: test_spdk_nvme_probe ...[2024-07-24 10:33:34.464653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:07.991 [2024-07-24 10:33:34.465006] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:07.991 [2024-07-24 10:33:34.465406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:07.991 passed 00:08:07.991 Test: test_spdk_nvme_connect ...[2024-07-24 10:33:34.465517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:07.991 [2024-07-24 10:33:34.465673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:07.991 [2024-07-24 10:33:34.466873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:07.991 passed 00:08:07.991 Test: test_nvme_ctrlr_probe_internal ...[2024-07-24 10:33:34.466988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:08:07.991 [2024-07-24 10:33:34.467614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:07.991 [2024-07-24 10:33:34.467702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:08:07.991 passed 00:08:07.991 Test: test_nvme_init_controllers ...[2024-07-24 10:33:34.468065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:07.991 passed 00:08:07.991 Test: test_nvme_driver_init ...[2024-07-24 10:33:34.468442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:07.991 [2024-07-24 10:33:34.468502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:07.991 [2024-07-24 10:33:34.578963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:07.991 passed 00:08:07.991 Test: test_spdk_nvme_detach ...passed 00:08:07.991 Test: test_nvme_completion_poll_cb ...[2024-07-24 10:33:34.579219] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:07.991 passed 00:08:07.991 Test: test_nvme_user_copy_cmd_complete ...passed 00:08:07.991 Test: test_nvme_allocate_request_null ...passed 
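For context, the trid failures above ("Key without ':' or '=' separator", "Key length 32 greater than maximum allowed 31", "Key without value") all concern the textual transport-ID format, a whitespace-separated list of key:value pairs. A minimal well-formed parse might look like the sketch below, assuming the public spdk_nvme_transport_id_parse() API from include/spdk/nvme.h; the PCI address is a placeholder.

#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

int
main(void)
{
    struct spdk_nvme_transport_id trid;

    memset(&trid, 0, sizeof(trid));
    /* key:value pairs separated by whitespace; the unit test feeds strings
     * that break exactly these rules (missing ':', over-long keys, no value). */
    if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:04:00.0") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }
    printf("trtype=%d traddr=%s\n", trid.trtype, trid.traddr);
    return 0;
}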
00:08:07.991 Test: test_nvme_allocate_request ...passed 00:08:07.991 Test: test_nvme_free_request ...passed 00:08:07.991 Test: test_nvme_allocate_request_user_copy ...passed 00:08:07.991 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:07.991 Test: test_nvme_request_check_timeout ...passed 00:08:07.991 Test: test_nvme_wait_for_completion ...passed 00:08:07.991 Test: test_spdk_nvme_parse_func ...passed 00:08:07.991 Test: test_spdk_nvme_detach_async ...passed 00:08:07.991 Test: test_nvme_parse_addr ...[2024-07-24 10:33:34.581097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:07.991 passed 00:08:07.991 00:08:07.991 Run Summary: Type Total Ran Passed Failed Inactive 00:08:07.991 suites 1 1 n/a 0 0 00:08:07.991 tests 25 25 25 0 0 00:08:07.991 asserts 326 326 326 0 n/a 00:08:07.991 00:08:07.991 Elapsed time = 0.013 seconds 00:08:07.991 10:33:34 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:07.991 00:08:07.991 00:08:07.991 CUnit - A unit testing framework for C - Version 2.1-3 00:08:07.991 http://cunit.sourceforge.net/ 00:08:07.991 00:08:07.991 00:08:07.991 Suite: nvme_ctrlr 00:08:07.991 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-24 10:33:34.613865] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.991 passed 00:08:07.991 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-24 10:33:34.615825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.991 passed 00:08:07.991 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-24 10:33:34.617264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.991 passed 00:08:07.991 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-24 10:33:34.618666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.991 passed 00:08:07.991 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-24 10:33:34.620028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.991 [2024-07-24 10:33:34.621291] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-24 10:33:34.622621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-24 10:33:34.623851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:07.991 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-24 10:33:34.626394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.992 [2024-07-24 10:33:34.628835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-24 10:33:34.630148] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:07.992 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-24 10:33:34.632827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.992 [2024-07-24 10:33:34.634120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-24 10:33:34.636516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:07.992 Test: test_nvme_ctrlr_init_delay ...[2024-07-24 10:33:34.639033] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.992 passed 00:08:07.992 Test: test_alloc_io_qpair_rr_1 ...[2024-07-24 10:33:34.640408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.992 [2024-07-24 10:33:34.640590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:07.992 [2024-07-24 10:33:34.640891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:07.992 [2024-07-24 10:33:34.641016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:07.992 passed 00:08:07.992 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-07-24 10:33:34.641124] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:07.992 passed 00:08:07.992 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:07.992 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-24 10:33:34.641370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.992 passed 00:08:07.992 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-24 10:33:34.641619] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:07.992 [2024-07-24 10:33:34.641789] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:07.992 passed 00:08:07.992 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-24 10:33:34.642118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:07.992 [2024-07-24 10:33:34.642375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:07.992 [2024-07-24 10:33:34.642531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:08:07.992 [2024-07-24 10:33:34.642654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:07.992 passed 00:08:07.992 Test: test_nvme_ctrlr_fail ...[2024-07-24 10:33:34.642772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:07.992 passed 00:08:07.992 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:07.992 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:07.992 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:07.992 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-24 10:33:34.643144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:08.559 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:08.559 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:08.559 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-24 10:33:34.985901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-24 10:33:34.993072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-24 10:33:34.994371] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 [2024-07-24 10:33:34.994478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:08.559 passed 00:08:08.559 Test: test_alloc_io_qpair_fail ...[2024-07-24 10:33:34.995662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_add_remove_process ...[2024-07-24 10:33:34.995783] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:08:08.559 Test: test_nvme_ctrlr_set_state ...passed 00:08:08.559 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-24 10:33:34.995933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
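(Editorial note: the spdk_nvme_ctrlr_update_firmware errors above — "invalid size", "fw_image_download failed", "fw_commit failed" — come from the argument checks on the public firmware-update call. A hedged sketch of a valid invocation follows; the payload size must be non-zero and dword-aligned, and the enum/struct names are taken from my reading of spdk/nvme.h, so treat them as assumptions for this tree.)

#include "spdk/nvme.h"

/* Sketch: download the firmware image in `image`/`image_size` to slot 1 and
 * activate it.  image_size must be > 0 and a multiple of 4 bytes, otherwise
 * the "spdk_nvme_ctrlr_update_firmware invalid size!" error is returned. */
static int
update_fw(struct spdk_nvme_ctrlr *ctrlr, void *image, uint32_t image_size)
{
	struct spdk_nvme_status status;

	return spdk_nvme_ctrlr_update_firmware(ctrlr, image, image_size, 1,
					       SPDK_NVME_FW_COMMIT_REPLACE_IMG,
					       &status);
}
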
00:08:08.559 [2024-07-24 10:33:34.995973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-24 10:33:35.017870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-24 10:33:35.056651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_reset ...[2024-07-24 10:33:35.058280] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_aer_callback ...[2024-07-24 10:33:35.058652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-24 10:33:35.060085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:08.559 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:08.559 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-24 10:33:35.061826] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:08.559 Test: test_nvme_ctrlr_ana_resize ...[2024-07-24 10:33:35.063225] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.559 passed 00:08:08.559 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:08.559 Test: test_nvme_transport_ctrlr_ready ...[2024-07-24 10:33:35.064786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:08.560 passed 00:08:08.560 Test: test_nvme_ctrlr_disable ...[2024-07-24 10:33:35.064848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:08:08.560 [2024-07-24 10:33:35.064903] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:08.560 passed 00:08:08.560 00:08:08.560 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.560 suites 1 1 n/a 0 0 00:08:08.560 tests 43 43 43 0 0 00:08:08.560 asserts 10418 10418 10418 0 n/a 00:08:08.560 00:08:08.560 Elapsed time = 0.411 seconds 00:08:08.560 10:33:35 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:08.560 00:08:08.560 00:08:08.560 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:08.560 http://cunit.sourceforge.net/ 00:08:08.560 00:08:08.560 00:08:08.560 Suite: nvme_ctrlr_cmd 00:08:08.560 Test: test_get_log_pages ...passed 00:08:08.560 Test: test_set_feature_cmd ...passed 00:08:08.560 Test: test_set_feature_ns_cmd ...passed 00:08:08.560 Test: test_get_feature_cmd ...passed 00:08:08.560 Test: test_get_feature_ns_cmd ...passed 00:08:08.560 Test: test_abort_cmd ...passed 00:08:08.560 Test: test_set_host_id_cmds ...[2024-07-24 10:33:35.114498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:08.560 passed 00:08:08.560 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:08.560 Test: test_io_raw_cmd ...passed 00:08:08.560 Test: test_io_raw_cmd_with_md ...passed 00:08:08.560 Test: test_namespace_attach ...passed 00:08:08.560 Test: test_namespace_detach ...passed 00:08:08.560 Test: test_namespace_create ...passed 00:08:08.560 Test: test_namespace_delete ...passed 00:08:08.560 Test: test_doorbell_buffer_config ...passed 00:08:08.560 Test: test_format_nvme ...passed 00:08:08.560 Test: test_fw_commit ...passed 00:08:08.560 Test: test_fw_image_download ...passed 00:08:08.560 Test: test_sanitize ...passed 00:08:08.560 Test: test_directive ...passed 00:08:08.560 Test: test_nvme_request_add_abort ...passed 00:08:08.560 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:08.560 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:08.560 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:08.560 00:08:08.560 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.560 suites 1 1 n/a 0 0 00:08:08.560 tests 24 24 24 0 0 00:08:08.560 asserts 198 198 198 0 n/a 00:08:08.560 00:08:08.560 Elapsed time = 0.001 seconds 00:08:08.560 10:33:35 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:08.560 00:08:08.560 00:08:08.560 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.560 http://cunit.sourceforge.net/ 00:08:08.560 00:08:08.560 00:08:08.560 Suite: nvme_ctrlr_cmd 00:08:08.560 Test: test_geometry_cmd ...passed 00:08:08.560 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:08.560 00:08:08.560 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.560 suites 1 1 n/a 0 0 00:08:08.560 tests 2 2 2 0 0 00:08:08.560 asserts 7 7 7 0 n/a 00:08:08.560 00:08:08.560 Elapsed time = 0.000 seconds 00:08:08.560 10:33:35 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:08.560 00:08:08.560 00:08:08.560 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.560 http://cunit.sourceforge.net/ 00:08:08.560 00:08:08.560 00:08:08.560 Suite: nvme 00:08:08.560 Test: test_nvme_ns_construct ...passed 00:08:08.560 Test: test_nvme_ns_uuid ...passed 00:08:08.560 Test: test_nvme_ns_csi ...passed 00:08:08.560 Test: test_nvme_ns_data ...passed 00:08:08.560 Test: test_nvme_ns_set_identify_data ...passed 00:08:08.560 Test: test_spdk_nvme_ns_get_values ...passed 00:08:08.560 Test: test_spdk_nvme_ns_is_active ...passed 00:08:08.560 Test: spdk_nvme_ns_supports ...passed 00:08:08.560 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:08.560 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:08.560 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:08.560 Test: test_nvme_ns_find_id_desc ...passed 00:08:08.560 00:08:08.560 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.560 suites 1 1 n/a 0 0 00:08:08.560 tests 
12 12 12 0 0 00:08:08.560 asserts 83 83 83 0 n/a 00:08:08.560 00:08:08.560 Elapsed time = 0.000 seconds 00:08:08.560 10:33:35 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:08.560 00:08:08.560 00:08:08.560 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.560 http://cunit.sourceforge.net/ 00:08:08.560 00:08:08.560 00:08:08.560 Suite: nvme_ns_cmd 00:08:08.560 Test: split_test ...passed 00:08:08.560 Test: split_test2 ...passed 00:08:08.560 Test: split_test3 ...passed 00:08:08.560 Test: split_test4 ...passed 00:08:08.560 Test: test_nvme_ns_cmd_flush ...passed 00:08:08.560 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:08.560 Test: test_nvme_ns_cmd_copy ...passed 00:08:08.560 Test: test_io_flags ...[2024-07-24 10:33:35.202801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:08.560 passed 00:08:08.560 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:08.560 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:08.560 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:08.560 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:08.560 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:08.560 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:08.560 Test: test_cmd_child_request ...passed 00:08:08.560 Test: test_nvme_ns_cmd_readv ...passed 00:08:08.560 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:08.560 Test: test_nvme_ns_cmd_writev ...[2024-07-24 10:33:35.204046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:08.560 passed 00:08:08.560 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:08.560 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:08.560 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:08.560 Test: test_nvme_ns_cmd_comparev ...passed 00:08:08.560 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:08.560 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:08.560 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:08.560 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:08.560 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:08.560 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-24 10:33:35.205841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:08.560 passed 00:08:08.560 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:08:08.560 Test: test_nvme_ns_cmd_verify ...[2024-07-24 10:33:35.205965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:08.560 passed 00:08:08.560 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:08.560 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:08.560 00:08:08.560 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.560 suites 1 1 n/a 0 0 00:08:08.560 tests 32 32 32 0 0 00:08:08.560 asserts 550 550 550 0 n/a 00:08:08.560 00:08:08.560 Elapsed time = 0.004 seconds 00:08:08.560 10:33:35 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:08.560 00:08:08.560 00:08:08.560 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.560 http://cunit.sourceforge.net/ 00:08:08.560 00:08:08.560 00:08:08.560 Suite: nvme_ns_cmd 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
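(Editorial note: the _is_io_flags_valid errors above — "Invalid io_flags 0xfffc" and "Invalid io_flags 0xffff000f" — reflect that the io_flags argument of the namespace command API only accepts defined SPDK_NVME_IO_FLAGS_* bits. A small sketch of a read with a valid flag; the ns, qpair, buffer and callback are assumed to already exist.)

#include "spdk/nvme.h"

/* Sketch: issue a one-block read at LBA 0 with Force Unit Access set.
 * Undefined io_flags bits (e.g. 0xfffc in the log above) are rejected. */
static int
read_lba0(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	  void *buf, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
	return spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* count */,
				     cb_fn, cb_arg,
				     SPDK_NVME_IO_FLAGS_FORCE_UNIT_ACCESS);
}
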
00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:08.820 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:08.820 00:08:08.820 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.820 suites 1 1 n/a 0 0 00:08:08.820 tests 12 12 12 0 0 00:08:08.820 asserts 123 123 123 0 n/a 00:08:08.820 00:08:08.820 Elapsed time = 0.001 seconds 00:08:08.820 10:33:35 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:08.820 00:08:08.820 00:08:08.820 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.820 http://cunit.sourceforge.net/ 00:08:08.820 00:08:08.820 00:08:08.820 Suite: nvme_qpair 00:08:08.820 Test: test3 ...passed 00:08:08.820 Test: test_ctrlr_failed ...passed 00:08:08.820 Test: struct_packing ...passed 00:08:08.820 Test: test_nvme_qpair_process_completions ...[2024-07-24 10:33:35.269089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:08.820 [2024-07-24 10:33:35.269524] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:08.820 [2024-07-24 10:33:35.269600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:08.820 [2024-07-24 10:33:35.269724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:08.820 passed 00:08:08.820 Test: test_nvme_completion_is_retry ...passed 00:08:08.820 Test: test_get_status_string ...passed 00:08:08.820 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:08.820 Test: test_nvme_qpair_submit_request ...passed 00:08:08.820 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:08.820 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:08.820 Test: test_nvme_qpair_init_deinit ...[2024-07-24 10:33:35.270243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:08.820 passed 00:08:08.820 Test: test_nvme_get_sgl_print_info ...passed 00:08:08.820 00:08:08.820 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.820 suites 1 1 n/a 0 0 00:08:08.820 tests 12 12 12 0 0 00:08:08.820 asserts 154 154 154 0 n/a 00:08:08.820 00:08:08.820 Elapsed time = 0.002 seconds 00:08:08.820 10:33:35 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:08.820 00:08:08.820 00:08:08.820 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.820 http://cunit.sourceforge.net/ 00:08:08.820 00:08:08.820 00:08:08.820 Suite: nvme_pcie 00:08:08.820 Test: test_prp_list_append 
...[2024-07-24 10:33:35.298454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:08.820 [2024-07-24 10:33:35.298780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:08.820 [2024-07-24 10:33:35.298856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:08.820 [2024-07-24 10:33:35.299140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:08.820 passed 00:08:08.820 Test: test_nvme_pcie_hotplug_monitor ...[2024-07-24 10:33:35.299261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:08.820 passed 00:08:08.820 Test: test_shadow_doorbell_update ...passed 00:08:08.820 Test: test_build_contig_hw_sgl_request ...passed 00:08:08.820 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:08.820 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:08.820 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:08.820 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:08:08.820 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed[2024-07-24 10:33:35.299495] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:08.820 00:08:08.820 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:08.820 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:08:08.820 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-24 10:33:35.299634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
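(Editorial note: the nvme_pcie_prp_list_append failures above are alignment checks: a PRP entry must be dword-aligned as a virtual address and page-aligned within the PRP list, which is why 0x100001 and 0x900800 are rejected — 0x900800 % 0x1000 == 0x800. A tiny illustrative check under an assumed 4 KiB page size, not the driver's actual code.)

#include <stdbool.h>
#include <stdint.h>

#define CTRLR_PAGE_SIZE 0x1000u	/* assumption: 4 KiB controller page size */

/* Mirrors the two rejections reported above: dword-aligned virtual address,
 * page-aligned PRP list entry. */
static bool
prp_entry_ok(uint64_t virt_addr, uint64_t prp_addr)
{
	return (virt_addr % 4 == 0) && (prp_addr % CTRLR_PAGE_SIZE == 0);
}
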
00:08:08.820 passed 00:08:08.820 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-24 10:33:35.299745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:08.820 passed 00:08:08.820 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-24 10:33:35.299811] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:08.820 [2024-07-24 10:33:35.299878] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:08.820 passed 00:08:08.820 00:08:08.820 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.820 suites 1 1 n/a 0 0 00:08:08.820 tests 14 14 14 0 0 00:08:08.820 asserts 235 235 235 0 n/a 00:08:08.820 00:08:08.820 Elapsed time = 0.002 seconds 00:08:08.820 10:33:35 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:08.820 00:08:08.820 00:08:08.820 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.820 http://cunit.sourceforge.net/ 00:08:08.820 00:08:08.820 00:08:08.820 Suite: nvme_ns_cmd 00:08:08.820 Test: nvme_poll_group_create_test ...passed 00:08:08.820 Test: nvme_poll_group_add_remove_test ...passed 00:08:08.820 Test: nvme_poll_group_process_completions ...passed 00:08:08.820 Test: nvme_poll_group_destroy_test ...passed 00:08:08.820 Test: nvme_poll_group_get_free_stats ...passed 00:08:08.820 00:08:08.820 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.820 suites 1 1 n/a 0 0 00:08:08.820 tests 5 5 5 0 0 00:08:08.820 asserts 75 75 75 0 n/a 00:08:08.820 00:08:08.820 Elapsed time = 0.001 seconds 00:08:08.820 10:33:35 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:08.820 00:08:08.820 00:08:08.820 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.820 http://cunit.sourceforge.net/ 00:08:08.820 00:08:08.820 00:08:08.820 Suite: nvme_quirks 00:08:08.820 Test: test_nvme_quirks_striping ...passed 00:08:08.820 00:08:08.820 Run Summary: Type Total Ran Passed Failed Inactive 00:08:08.820 suites 1 1 n/a 0 0 00:08:08.820 tests 1 1 1 0 0 00:08:08.820 asserts 5 5 5 0 n/a 00:08:08.820 00:08:08.820 Elapsed time = 0.000 seconds 00:08:08.820 10:33:35 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:08.820 00:08:08.820 00:08:08.820 CUnit - A unit testing framework for C - Version 2.1-3 00:08:08.820 http://cunit.sourceforge.net/ 00:08:08.820 00:08:08.820 00:08:08.820 Suite: nvme_tcp 00:08:08.820 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:08.820 Test: test_nvme_tcp_build_iovs ...passed 00:08:08.820 Test: test_nvme_tcp_build_sgl_request ...[2024-07-24 10:33:35.386872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffc05d1b990, and the iovcnt=16, remaining_size=28672 00:08:08.820 passed 00:08:08.820 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:08.820 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:08.821 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:08.821 Test: test_nvme_tcp_req_get ...passed 00:08:08.821 Test: test_nvme_tcp_req_init ...passed 00:08:08.821 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:08.821 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:08.821 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:08:08.821 Test: 
test_nvme_tcp_alloc_reqs ...[2024-07-24 10:33:35.387689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1d6b0 is same with the state(6) to be set 00:08:08.821 passed 00:08:08.821 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:08:08.821 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-24 10:33:35.388049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1c840 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffc05d1d370 00:08:08.821 [2024-07-24 10:33:35.388182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:08.821 [2024-07-24 10:33:35.388294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388369] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:08.821 [2024-07-24 10:33:35.388482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:08.821 [2024-07-24 10:33:35.388565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388775] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.388882] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1cd00 is same with the state(5) to be set 00:08:08.821 passed 00:08:08.821 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-24 10:33:35.389079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:08.821 [2024-07-24 10:33:35.389150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:08.821 [2024-07-24 10:33:35.389442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:08.821 passed 00:08:08.821 Test: 
test_nvme_tcp_qpair_icreq_send ...passed 00:08:08.821 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-24 10:33:35.389558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc05d1ceb0): PDU Sequence Error 00:08:08.821 passed 00:08:08.821 Test: test_nvme_tcp_icresp_handle ...[2024-07-24 10:33:35.389729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:08.821 [2024-07-24 10:33:35.389782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:08.821 [2024-07-24 10:33:35.389836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1c850 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.389899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:08.821 [2024-07-24 10:33:35.389949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1c850 is same with the state(5) to be set 00:08:08.821 passed 00:08:08.821 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-24 10:33:35.390028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1c850 is same with the state(0) to be set 00:08:08.821 passed 00:08:08.821 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-24 10:33:35.390107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc05d1d370): PDU Sequence Error 00:08:08.821 [2024-07-24 10:33:35.390197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffc05d1bb30 00:08:08.821 passed 00:08:08.821 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:08.821 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-24 10:33:35.390403] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffc05d1b1b0, errno=0, rc=0 00:08:08.821 [2024-07-24 10:33:35.390467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1b1b0 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.390550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc05d1b1b0 is same with the state(5) to be set 00:08:08.821 [2024-07-24 10:33:35.390604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc05d1b1b0 (0): Success 00:08:08.821 [2024-07-24 10:33:35.390658] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc05d1b1b0 (0): Success 00:08:08.821 passed 00:08:09.079 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-24 10:33:35.499422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
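(Editorial note: the repeated "Failed to create qpair with size 0/1. Minimum queue size is 2." messages are the I/O-qpair size check; NVMe queues keep one slot unused to distinguish a full queue from an empty one, so a usable queue needs at least two entries. A hedged sketch using the public qpair-options API, assuming ctrlr already exists.)

#include "spdk/nvme.h"

/* Sketch: allocate an I/O qpair with an explicit queue size.  Sizes below 2
 * are rejected with the "Minimum queue size is 2" error seen above. */
static struct spdk_nvme_qpair *
alloc_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.io_queue_size = 128;	/* any value >= 2 */

	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
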
00:08:09.079 passed 00:08:09.079 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:09.079 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-24 10:33:35.499592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:09.079 [2024-07-24 10:33:35.499828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:09.079 passed 00:08:09.079 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-24 10:33:35.499888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:09.079 [2024-07-24 10:33:35.500101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:09.079 [2024-07-24 10:33:35.500162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:09.079 [2024-07-24 10:33:35.500303] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:09.079 [2024-07-24 10:33:35.500371] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:09.079 [2024-07-24 10:33:35.500502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:08:09.079 passed 00:08:09.079 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-24 10:33:35.500597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:09.079 [2024-07-24 10:33:35.500754] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:08:09.079 [2024-07-24 10:33:35.500825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:09.079 passed 00:08:09.079 00:08:09.079 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.079 suites 1 1 n/a 0 0 00:08:09.079 tests 27 27 27 0 0 00:08:09.079 asserts 624 624 624 0 n/a 00:08:09.079 00:08:09.079 Elapsed time = 0.114 seconds 00:08:09.079 10:33:35 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:09.079 00:08:09.079 00:08:09.079 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.079 http://cunit.sourceforge.net/ 00:08:09.079 00:08:09.079 00:08:09.079 Suite: nvme_transport 00:08:09.079 Test: test_nvme_get_transport ...passed 00:08:09.079 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:09.079 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:09.079 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:09.079 Test: test_ctrlr_get_memory_domains ...passed 00:08:09.079 00:08:09.079 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.079 suites 1 1 n/a 0 0 00:08:09.079 tests 5 5 5 0 0 00:08:09.079 asserts 28 28 28 0 n/a 00:08:09.079 00:08:09.079 Elapsed time = 0.000 seconds 00:08:09.079 10:33:35 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:09.079 00:08:09.079 00:08:09.079 CUnit - A unit testing framework for 
C - Version 2.1-3 00:08:09.079 http://cunit.sourceforge.net/ 00:08:09.079 00:08:09.079 00:08:09.079 Suite: nvme_io_msg 00:08:09.079 Test: test_nvme_io_msg_send ...passed 00:08:09.079 Test: test_nvme_io_msg_process ...passed 00:08:09.079 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:09.079 00:08:09.079 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.079 suites 1 1 n/a 0 0 00:08:09.079 tests 3 3 3 0 0 00:08:09.079 asserts 56 56 56 0 n/a 00:08:09.079 00:08:09.079 Elapsed time = 0.000 seconds 00:08:09.079 10:33:35 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:09.079 00:08:09.079 00:08:09.079 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.079 http://cunit.sourceforge.net/ 00:08:09.079 00:08:09.079 00:08:09.080 Suite: nvme_pcie_common 00:08:09.080 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-24 10:33:35.603283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:09.080 passed 00:08:09.080 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:09.080 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:09.080 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-24 10:33:35.604290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:09.080 [2024-07-24 10:33:35.604476] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:09.080 [2024-07-24 10:33:35.604541] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:09.080 passed 00:08:09.080 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:08:09.080 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-24 10:33:35.605075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:09.080 passed 00:08:09.080 00:08:09.080 [2024-07-24 10:33:35.605152] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:09.080 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.080 suites 1 1 n/a 0 0 00:08:09.080 tests 6 6 6 0 0 00:08:09.080 asserts 148 148 148 0 n/a 00:08:09.080 00:08:09.080 Elapsed time = 0.002 seconds 00:08:09.080 10:33:35 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:09.080 00:08:09.080 00:08:09.080 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.080 http://cunit.sourceforge.net/ 00:08:09.080 00:08:09.080 00:08:09.080 Suite: nvme_fabric 00:08:09.080 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:09.080 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:09.080 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:09.080 Test: test_nvme_fabric_discover_probe ...passed 00:08:09.080 Test: test_nvme_fabric_qpair_connect ...[2024-07-24 10:33:35.635798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:09.080 passed 00:08:09.080 00:08:09.080 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.080 suites 1 
1 n/a 0 0 00:08:09.080 tests 5 5 5 0 0 00:08:09.080 asserts 60 60 60 0 n/a 00:08:09.080 00:08:09.080 Elapsed time = 0.001 seconds 00:08:09.080 10:33:35 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:09.080 00:08:09.080 00:08:09.080 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.080 http://cunit.sourceforge.net/ 00:08:09.080 00:08:09.080 00:08:09.080 Suite: nvme_opal 00:08:09.080 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:09.080 Test: test_opal_add_short_atom_header ...[2024-07-24 10:33:35.666467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:09.080 passed 00:08:09.080 00:08:09.080 Run Summary: Type Total Ran Passed Failed Inactive 00:08:09.080 suites 1 1 n/a 0 0 00:08:09.080 tests 2 2 2 0 0 00:08:09.080 asserts 22 22 22 0 n/a 00:08:09.080 00:08:09.080 Elapsed time = 0.001 seconds 00:08:09.080 00:08:09.080 real 0m1.233s 00:08:09.080 user 0m0.631s 00:08:09.080 sys 0m0.460s 00:08:09.080 10:33:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.080 10:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:09.080 ************************************ 00:08:09.080 END TEST unittest_nvme 00:08:09.080 ************************************ 00:08:09.080 10:33:35 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:09.080 10:33:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:09.080 10:33:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.080 10:33:35 -- common/autotest_common.sh@10 -- # set +x 00:08:09.080 ************************************ 00:08:09.080 START TEST unittest_log 00:08:09.080 ************************************ 00:08:09.080 10:33:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:09.080 00:08:09.080 00:08:09.080 CUnit - A unit testing framework for C - Version 2.1-3 00:08:09.080 http://cunit.sourceforge.net/ 00:08:09.080 00:08:09.080 00:08:09.080 Suite: log 00:08:09.080 Test: log_test ...[2024-07-24 10:33:35.742487] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:08:09.080 [2024-07-24 10:33:35.743083] log_ut.c: 55:log_test: *DEBUG*: log test 00:08:09.080 log dump test: 00:08:09.080 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:09.080 passed 00:08:09.080 Test: deprecation ...spdk dump test: 00:08:09.080 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:09.080 spdk dump test: 00:08:09.080 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:09.080 00000010 65 20 63 68 61 72 73 e chars 00:08:10.455 passed 00:08:10.455 00:08:10.455 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.455 suites 1 1 n/a 0 0 00:08:10.455 tests 2 2 2 0 0 00:08:10.455 asserts 73 73 73 0 n/a 00:08:10.455 00:08:10.455 Elapsed time = 0.001 seconds 00:08:10.455 00:08:10.455 real 0m1.035s 00:08:10.455 user 0m0.017s 00:08:10.455 sys 0m0.019s 00:08:10.455 10:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.455 ************************************ 00:08:10.455 END TEST unittest_log 00:08:10.455 10:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.455 ************************************ 00:08:10.455 10:33:36 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:10.455 10:33:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:08:10.455 10:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.455 10:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.455 ************************************ 00:08:10.455 START TEST unittest_lvol 00:08:10.455 ************************************ 00:08:10.456 10:33:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:10.456 00:08:10.456 00:08:10.456 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.456 http://cunit.sourceforge.net/ 00:08:10.456 00:08:10.456 00:08:10.456 Suite: lvol 00:08:10.456 Test: lvs_init_unload_success ...[2024-07-24 10:33:36.836296] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:10.456 passed 00:08:10.456 Test: lvs_init_destroy_success ...[2024-07-24 10:33:36.837060] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:10.456 passed 00:08:10.456 Test: lvs_init_opts_success ...passed 00:08:10.456 Test: lvs_unload_lvs_is_null_fail ...[2024-07-24 10:33:36.837422] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:10.456 passed 00:08:10.456 Test: lvs_names ...[2024-07-24 10:33:36.837522] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:10.456 [2024-07-24 10:33:36.837598] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:08:10.456 [2024-07-24 10:33:36.837829] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:10.456 passed 00:08:10.456 Test: lvol_create_destroy_success ...passed 00:08:10.456 Test: lvol_create_fail ...[2024-07-24 10:33:36.838560] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:10.456 [2024-07-24 10:33:36.838700] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:10.456 passed 00:08:10.456 Test: lvol_destroy_fail ...[2024-07-24 10:33:36.839071] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:10.456 passed 00:08:10.456 Test: lvol_close ...[2024-07-24 10:33:36.839380] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:10.456 [2024-07-24 10:33:36.839474] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:10.456 passed 00:08:10.456 Test: lvol_resize ...passed 00:08:10.456 Test: lvol_set_read_only ...passed 00:08:10.456 Test: test_lvs_load ...[2024-07-24 10:33:36.840913] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:10.456 [2024-07-24 10:33:36.841114] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:10.456 passed 00:08:10.456 Test: lvols_load ...[2024-07-24 10:33:36.841839] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:10.456 [2024-07-24 10:33:36.842157] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:10.456 passed 00:08:10.456 Test: lvol_open ...passed 00:08:10.456 Test: lvol_snapshot ...passed 00:08:10.456 Test: lvol_snapshot_fail ...[2024-07-24 
10:33:36.843814] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:10.456 passed 00:08:10.456 Test: lvol_clone ...passed 00:08:10.456 Test: lvol_clone_fail ...[2024-07-24 10:33:36.845069] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:10.456 passed 00:08:10.456 Test: lvol_iter_clones ...passed 00:08:10.456 Test: lvol_refcnt ...[2024-07-24 10:33:36.846200] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 3575a01d-94c0-4c2d-aa9f-0c20c099c063 because it is still open 00:08:10.456 passed 00:08:10.456 Test: lvol_names ...[2024-07-24 10:33:36.846871] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:10.456 [2024-07-24 10:33:36.847159] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:10.456 [2024-07-24 10:33:36.847617] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:10.456 passed 00:08:10.456 Test: lvol_create_thin_provisioned ...passed 00:08:10.456 Test: lvol_rename ...[2024-07-24 10:33:36.848735] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:10.456 [2024-07-24 10:33:36.849025] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:10.456 passed 00:08:10.456 Test: lvs_rename ...[2024-07-24 10:33:36.849701] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:10.456 passed 00:08:10.456 Test: lvol_inflate ...[2024-07-24 10:33:36.850344] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:10.456 passed 00:08:10.456 Test: lvol_decouple_parent ...[2024-07-24 10:33:36.851026] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:10.456 passed 00:08:10.456 Test: lvol_get_xattr ...passed 00:08:10.456 Test: lvol_esnap_reload ...passed 00:08:10.456 Test: lvol_esnap_create_bad_args ...[2024-07-24 10:33:36.852465] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:10.456 [2024-07-24 10:33:36.852675] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
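(Editorial note: every *_ut binary in this run, including the lvol_ut run in progress here, prints the same "CUnit - A unit testing framework for C" banner because they all share the same CUnit registration boilerplate; the suite and test names in the run summaries come straight from it. A generic sketch of that pattern follows — the suite and test names below are placeholders, not the exact lvol_ut code.)

#include <CUnit/Basic.h>

static void
example_test(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
	CU_pSuite suite;
	unsigned int num_failures;

	CU_initialize_registry();
	suite = CU_add_suite("example", NULL, NULL);
	CU_add_test(suite, "example_test", example_test);

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	num_failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return num_failures;
}
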
00:08:10.456 [2024-07-24 10:33:36.852927] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:10.456 [2024-07-24 10:33:36.853273] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:10.456 [2024-07-24 10:33:36.853655] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:10.456 passed 00:08:10.456 Test: lvol_esnap_create_delete ...passed 00:08:10.456 Test: lvol_esnap_load_esnaps ...[2024-07-24 10:33:36.854594] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:10.456 passed 00:08:10.456 Test: lvol_esnap_missing ...[2024-07-24 10:33:36.855123] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:10.456 [2024-07-24 10:33:36.855318] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:10.456 passed 00:08:10.456 Test: lvol_esnap_hotplug ... 00:08:10.456 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:10.456 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:10.456 [2024-07-24 10:33:36.856829] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 3278a3b5-e362-4409-aebe-bc7eac88da3a: failed to create esnap bs_dev: error -12 00:08:10.456 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:10.456 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:10.456 [2024-07-24 10:33:36.857456] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol e33efaef-ae3d-4532-91a8-53432ad8df7d: failed to create esnap bs_dev: error -12 00:08:10.456 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:10.456 [2024-07-24 10:33:36.857871] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1b763d5a-2c2c-4d2c-92b4-1559cafbe8b7: failed to create esnap bs_dev: error -12 00:08:10.456 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:10.456 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:10.456 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:10.456 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:10.456 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:10.456 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:10.456 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:10.456 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:10.456 passed 00:08:10.456 Test: lvol_get_by ...passed 00:08:10.456 00:08:10.456 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.456 suites 1 1 n/a 0 0 00:08:10.456 tests 34 34 34 0 0 00:08:10.456 asserts 1439 1439 1439 0 n/a 00:08:10.456 00:08:10.456 Elapsed time = 0.016 seconds 00:08:10.456 00:08:10.456 real 0m0.061s 00:08:10.456 user 0m0.026s 00:08:10.456 ************************************ 00:08:10.456 END 
TEST unittest_lvol 00:08:10.456 ************************************ 00:08:10.456 sys 0m0.027s 00:08:10.456 10:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.456 10:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.456 10:33:36 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:10.456 10:33:36 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:10.456 10:33:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.456 10:33:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.456 10:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.456 ************************************ 00:08:10.456 START TEST unittest_nvme_rdma 00:08:10.456 ************************************ 00:08:10.456 10:33:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:10.456 00:08:10.456 00:08:10.456 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.456 http://cunit.sourceforge.net/ 00:08:10.456 00:08:10.456 00:08:10.456 Suite: nvme_rdma 00:08:10.456 Test: test_nvme_rdma_build_sgl_request ...[2024-07-24 10:33:36.952631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:10.456 [2024-07-24 10:33:36.953588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:10.456 [2024-07-24 10:33:36.953908] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:10.456 passed 00:08:10.456 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:10.456 Test: test_nvme_rdma_build_contig_request ...[2024-07-24 10:33:36.954472] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:10.456 passed 00:08:10.456 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:10.457 Test: test_nvme_rdma_create_reqs ...[2024-07-24 10:33:36.955066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:10.457 passed 00:08:10.457 Test: test_nvme_rdma_create_rsps ...[2024-07-24 10:33:36.955819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:10.457 passed 00:08:10.457 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-24 10:33:36.956366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:10.457 [2024-07-24 10:33:36.956625] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:10.457 passed 00:08:10.457 Test: test_nvme_rdma_poller_create ...passed 00:08:10.457 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-24 10:33:36.957320] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:10.457 passed 00:08:10.457 Test: test_nvme_rdma_ctrlr_construct ...passed 00:08:10.457 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:10.457 Test: test_nvme_rdma_req_init ...passed 00:08:10.457 Test: test_nvme_rdma_validate_cm_event ...[2024-07-24 10:33:36.958321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:10.457 [2024-07-24 10:33:36.958565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:10.457 passed 00:08:10.457 Test: test_nvme_rdma_qpair_init ...passed 00:08:10.457 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:10.457 Test: test_nvme_rdma_memory_domain ...[2024-07-24 10:33:36.959401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:10.457 passed 00:08:10.457 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:10.457 Test: test_rdma_get_memory_translation ...[2024-07-24 10:33:36.959943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:10.457 [2024-07-24 10:33:36.960206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:10.457 passed 00:08:10.457 Test: test_get_rdma_qpair_from_wc ...passed 00:08:10.457 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:10.457 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-24 10:33:36.960950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:10.457 [2024-07-24 10:33:36.961201] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:10.457 passed 00:08:10.457 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-24 10:33:36.961661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:10.457 [2024-07-24 10:33:36.961925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:10.457 [2024-07-24 10:33:36.962155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe3a3a0c90 on poll group 0x60b0000001a0 00:08:10.457 [2024-07-24 10:33:36.962432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
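(Editorial note: the "SGL length 16777216 exceeds max keyed SGL block size 16777215" failures above are a protocol limit rather than a bug: the NVMe-oF keyed SGL data block descriptor carries a 24-bit length, so one keyed SGL segment tops out at 2^24 - 1 bytes and anything larger must be split. A one-line illustration of that arithmetic; the macro name is made up for the example.)

/* 24-bit length field in the keyed SGL data block descriptor: */
#define EXAMPLE_MAX_KEYED_SGL_LEN ((1u << 24) - 1)	/* 16777215 bytes */
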
00:08:10.457 [2024-07-24 10:33:36.962683] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:10.457 [2024-07-24 10:33:36.962921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffe3a3a0c90 on poll group 0x60b0000001a0 00:08:10.457 [2024-07-24 10:33:36.963195] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:10.457 passed 00:08:10.457 00:08:10.457 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.457 suites 1 1 n/a 0 0 00:08:10.457 tests 22 22 22 0 0 00:08:10.457 asserts 412 412 412 0 n/a 00:08:10.457 00:08:10.457 Elapsed time = 0.004 seconds 00:08:10.457 00:08:10.457 real 0m0.044s 00:08:10.457 user 0m0.019s 00:08:10.457 sys 0m0.017s 00:08:10.457 10:33:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.457 10:33:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.457 ************************************ 00:08:10.457 END TEST unittest_nvme_rdma 00:08:10.457 ************************************ 00:08:10.457 10:33:37 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:10.457 10:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.457 10:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.457 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.457 ************************************ 00:08:10.457 START TEST unittest_nvmf_transport 00:08:10.457 ************************************ 00:08:10.457 10:33:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:10.457 00:08:10.457 00:08:10.457 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.457 http://cunit.sourceforge.net/ 00:08:10.457 00:08:10.457 00:08:10.457 Suite: nvmf 00:08:10.457 Test: test_spdk_nvmf_transport_create ...[2024-07-24 10:33:37.064110] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:10.457 [2024-07-24 10:33:37.064616] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:10.457 [2024-07-24 10:33:37.064813] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:10.457 [2024-07-24 10:33:37.065125] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:10.457 passed 00:08:10.457 Test: test_nvmf_transport_poll_group_create ...passed 00:08:10.457 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-24 10:33:37.065850] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:10.457 [2024-07-24 10:33:37.066053] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:10.457 [2024-07-24 10:33:37.066182] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:10.457 passed 00:08:10.457 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:08:10.457 00:08:10.457 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.457 suites 1 1 n/a 0 0 00:08:10.457 tests 4 4 4 0 0 00:08:10.457 asserts 49 49 49 0 n/a 00:08:10.457 00:08:10.457 Elapsed time = 0.002 seconds 00:08:10.457 00:08:10.457 real 0m0.045s 00:08:10.457 user 0m0.020s 00:08:10.457 sys 0m0.023s 00:08:10.457 10:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.457 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.457 ************************************ 00:08:10.457 END TEST unittest_nvmf_transport 00:08:10.457 ************************************ 00:08:10.716 10:33:37 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:10.716 10:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.716 10:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.716 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.716 ************************************ 00:08:10.716 START TEST unittest_rdma 00:08:10.716 ************************************ 00:08:10.716 10:33:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:10.716 00:08:10.716 00:08:10.716 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.716 http://cunit.sourceforge.net/ 00:08:10.716 00:08:10.716 00:08:10.716 Suite: rdma_common 00:08:10.716 Test: test_spdk_rdma_pd ...[2024-07-24 10:33:37.162672] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:10.716 [2024-07-24 10:33:37.163306] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:10.716 passed 00:08:10.716 00:08:10.716 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.716 suites 1 1 n/a 0 0 00:08:10.716 tests 1 1 1 0 0 00:08:10.716 asserts 31 31 31 0 n/a 00:08:10.716 00:08:10.716 Elapsed time = 0.001 seconds 00:08:10.716 00:08:10.716 real 0m0.033s 00:08:10.716 user 0m0.024s 00:08:10.716 sys 0m0.009s 00:08:10.716 10:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.716 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.716 ************************************ 00:08:10.716 END TEST unittest_rdma 00:08:10.716 ************************************ 00:08:10.716 10:33:37 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:10.716 10:33:37 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:10.716 10:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.716 10:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.716 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.716 ************************************ 00:08:10.716 START TEST unittest_nvme_cuse 00:08:10.716 ************************************ 00:08:10.716 10:33:37 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:10.716 00:08:10.716 00:08:10.716 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.716 http://cunit.sourceforge.net/ 00:08:10.716 00:08:10.716 00:08:10.716 Suite: nvme_cuse 00:08:10.716 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:10.716 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:10.716 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:10.716 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:10.716 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:10.716 Test: test_cuse_nvme_submit_io ...[2024-07-24 10:33:37.255685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:10.716 passed 00:08:10.716 Test: test_cuse_nvme_reset ...[2024-07-24 10:33:37.256076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:10.716 passed 00:08:10.716 Test: test_nvme_cuse_stop ...passed 00:08:10.716 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:10.716 00:08:10.716 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.716 suites 1 1 n/a 0 0 00:08:10.716 tests 9 9 9 0 0 00:08:10.716 asserts 121 121 121 0 n/a 00:08:10.716 00:08:10.716 Elapsed time = 0.002 seconds 00:08:10.716 00:08:10.716 real 0m0.038s 00:08:10.716 user 0m0.022s 00:08:10.716 sys 0m0.017s 00:08:10.716 10:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.716 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.716 ************************************ 00:08:10.716 END TEST unittest_nvme_cuse 00:08:10.716 ************************************ 00:08:10.716 10:33:37 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:08:10.716 10:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.716 10:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.716 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:10.716 ************************************ 00:08:10.717 START TEST unittest_nvmf 00:08:10.717 ************************************ 00:08:10.717 10:33:37 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:08:10.717 10:33:37 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:10.717 00:08:10.717 00:08:10.717 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.717 http://cunit.sourceforge.net/ 00:08:10.717 00:08:10.717 00:08:10.717 Suite: nvmf 00:08:10.717 Test: test_get_log_page ...[2024-07-24 10:33:37.354270] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:10.717 passed 00:08:10.717 Test: test_process_fabrics_cmd ...passed 00:08:10.717 Test: test_connect ...[2024-07-24 10:33:37.355324] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:10.717 [2024-07-24 10:33:37.355468] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:10.717 [2024-07-24 10:33:37.355585] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:10.717 [2024-07-24 10:33:37.355643] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:08:10.717 [2024-07-24 10:33:37.355765] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:10.717 [2024-07-24 10:33:37.355835] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:10.717 [2024-07-24 10:33:37.355966] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:10.717 [2024-07-24 10:33:37.356028] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:10.717 [2024-07-24 10:33:37.356195] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:10.717 [2024-07-24 10:33:37.356304] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:10.717 [2024-07-24 10:33:37.356637] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:10.717 [2024-07-24 10:33:37.356773] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:10.717 [2024-07-24 10:33:37.356902] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:10.717 [2024-07-24 10:33:37.357034] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:10.717 [2024-07-24 10:33:37.357171] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:08:10.717 [2024-07-24 10:33:37.357356] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:08:10.717 passed 00:08:10.717 Test: test_get_ns_id_desc_list ...passed 00:08:10.717 Test: test_identify_ns ...[2024-07-24 10:33:37.357643] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:10.717 [2024-07-24 10:33:37.357880] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:10.717 [2024-07-24 10:33:37.358049] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:10.717 passed 00:08:10.717 Test: test_identify_ns_iocs_specific ...[2024-07-24 10:33:37.358204] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:10.717 [2024-07-24 10:33:37.358506] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:10.717 passed 00:08:10.717 Test: test_reservation_write_exclusive ...passed 00:08:10.717 Test: test_reservation_exclusive_access ...passed 00:08:10.717 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:10.717 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:10.717 Test: test_reservation_notification_log_page ...passed 00:08:10.717 Test: test_get_dif_ctx ...passed 00:08:10.717 Test: test_set_get_features ...[2024-07-24 10:33:37.359076] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:10.717 [2024-07-24 10:33:37.359138] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:10.717 [2024-07-24 10:33:37.359200] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:10.717 [2024-07-24 10:33:37.359274] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:10.717 passed 00:08:10.717 Test: test_identify_ctrlr ...passed 00:08:10.717 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:10.717 Test: test_custom_admin_cmd ...passed 00:08:10.717 Test: test_fused_compare_and_write ...[2024-07-24 10:33:37.359769] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:10.717 [2024-07-24 10:33:37.359842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:10.717 passed 00:08:10.717 Test: test_multi_async_event_reqs ...passed 00:08:10.717 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:10.717 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:10.717 Test: test_multi_async_events ...[2024-07-24 10:33:37.359908] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:10.717 passed 00:08:10.717 Test: test_rae ...passed 00:08:10.717 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:10.717 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:10.717 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-24 10:33:37.360444] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:08:10.717 passed 00:08:10.717 Test: test_zcopy_read ...passed 00:08:10.717 Test: test_zcopy_write ...passed 00:08:10.717 Test: test_nvmf_property_set ...passed 00:08:10.717 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-24 10:33:37.360655] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:10.717 passed 00:08:10.717 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-24 10:33:37.360767] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:10.717 [2024-07-24 10:33:37.360832] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:10.717 [2024-07-24 10:33:37.360889] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:10.717 passed 00:08:10.717 00:08:10.717 [2024-07-24 10:33:37.360938] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:10.717 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.717 suites 1 1 n/a 0 0 00:08:10.717 tests 30 30 30 0 0 00:08:10.717 asserts 885 885 885 0 n/a 00:08:10.717 00:08:10.717 Elapsed time = 0.007 seconds 00:08:10.717 10:33:37 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:10.975 00:08:10.975 00:08:10.975 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.975 http://cunit.sourceforge.net/ 00:08:10.975 00:08:10.975 00:08:10.975 Suite: nvmf 00:08:10.975 Test: test_get_rw_params ...passed 00:08:10.975 Test: test_lba_in_range ...passed 00:08:10.975 Test: test_get_dif_ctx ...passed 00:08:10.975 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:10.975 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-24 10:33:37.396319] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:10.975 [2024-07-24 10:33:37.397074] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:10.975 [2024-07-24 10:33:37.397487] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:10.975 passed 00:08:10.975 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-24 10:33:37.397569] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:10.975 [2024-07-24 10:33:37.397732] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:10.975 passed 00:08:10.975 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-24 10:33:37.398280] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:08:10.975 [2024-07-24 10:33:37.398334] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:10.975 [2024-07-24 10:33:37.398425] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:10.976 [2024-07-24 10:33:37.398469] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:10.976 passed 00:08:10.976 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:10.976 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:10.976 00:08:10.976 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.976 suites 1 1 n/a 0 0 00:08:10.976 tests 9 9 9 0 0 00:08:10.976 asserts 157 157 157 0 n/a 00:08:10.976 00:08:10.976 Elapsed time = 0.003 seconds 00:08:10.976 10:33:37 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:10.976 00:08:10.976 00:08:10.976 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.976 http://cunit.sourceforge.net/ 00:08:10.976 00:08:10.976 00:08:10.976 Suite: nvmf 00:08:10.976 Test: test_discovery_log ...passed 00:08:10.976 Test: test_discovery_log_with_filters ...passed 00:08:10.976 00:08:10.976 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.976 suites 1 1 n/a 0 0 00:08:10.976 tests 2 2 2 0 0 00:08:10.976 asserts 238 238 238 0 n/a 00:08:10.976 00:08:10.976 Elapsed time = 0.003 seconds 00:08:10.976 10:33:37 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:10.976 00:08:10.976 00:08:10.976 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.976 http://cunit.sourceforge.net/ 00:08:10.976 00:08:10.976 00:08:10.976 Suite: nvmf 
00:08:10.976 Test: nvmf_test_create_subsystem ...[2024-07-24 10:33:37.480240] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:10.976 [2024-07-24 10:33:37.480771] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:10.976 [2024-07-24 10:33:37.480930] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:10.976 [2024-07-24 10:33:37.481023] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:08:10.976 [2024-07-24 10:33:37.481100] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:10.976 [2024-07-24 10:33:37.481192] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:10.976 [2024-07-24 10:33:37.481392] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:10.976 [2024-07-24 10:33:37.481699] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:10.976 [2024-07-24 10:33:37.481883] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:10.976 [2024-07-24 10:33:37.481986] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:10.976 [2024-07-24 10:33:37.482050] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:10.976 passed 00:08:10.976 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-24 10:33:37.482367] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:10.976 [2024-07-24 10:33:37.482571] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:10.976 passed 00:08:10.976 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:10.976 Test: test_reservation_register ...[2024-07-24 10:33:37.482975] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 [2024-07-24 10:33:37.483214] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:10.976 passed 00:08:10.976 Test: test_reservation_register_with_ptpl ...passed 00:08:10.976 Test: test_reservation_acquire_preempt_1 ...[2024-07-24 10:33:37.484841] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 passed 00:08:10.976 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:10.976 Test: test_reservation_release ...[2024-07-24 10:33:37.486806] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 passed 00:08:10.976 Test: test_reservation_unregister_notification ...[2024-07-24 10:33:37.487098] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 passed 00:08:10.976 Test: test_reservation_release_notification ...[2024-07-24 10:33:37.487442] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 passed 00:08:10.976 Test: test_reservation_release_notification_write_exclusive ...[2024-07-24 10:33:37.487740] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 passed 00:08:10.976 Test: test_reservation_clear_notification ...[2024-07-24 10:33:37.488059] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 passed 00:08:10.976 Test: test_reservation_preempt_notification ...[2024-07-24 10:33:37.488359] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:10.976 passed 00:08:10.976 Test: test_spdk_nvmf_ns_event ...passed 00:08:10.976 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:10.976 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:10.976 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-24 10:33:37.489277] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:10.976 [2024-07-24 10:33:37.489409] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:08:10.976 passed 00:08:10.976 Test: test_nvmf_ns_reservation_report ...[2024-07-24 10:33:37.489614] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:10.976 passed 00:08:10.976 Test: test_nvmf_nqn_is_valid ...[2024-07-24 10:33:37.489714] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:10.976 [2024-07-24 10:33:37.489767] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:d6413834-89b7-4825-bf76-c35a6aa69dc": uuid is not the correct length 00:08:10.976 passed 00:08:10.976 Test: test_nvmf_ns_reservation_restore ...[2024-07-24 10:33:37.489821] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:10.976 [2024-07-24 10:33:37.489947] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:10.976 passed 00:08:10.976 Test: test_nvmf_subsystem_state_change ...passed 00:08:10.976 Test: test_nvmf_reservation_custom_ops ...passed 00:08:10.976 00:08:10.976 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.976 suites 1 1 n/a 0 0 00:08:10.976 tests 22 22 22 0 0 00:08:10.976 asserts 407 407 407 0 n/a 00:08:10.976 00:08:10.976 Elapsed time = 0.011 seconds 00:08:10.976 10:33:37 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:10.976 00:08:10.976 00:08:10.976 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.976 http://cunit.sourceforge.net/ 00:08:10.976 00:08:10.976 00:08:10.976 Suite: nvmf 00:08:10.976 Test: test_nvmf_tcp_create ...[2024-07-24 10:33:37.565547] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:10.976 passed 00:08:10.976 Test: test_nvmf_tcp_destroy ...passed 00:08:10.976 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:10.976 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:10.976 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:10.976 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:11.235 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:11.235 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-24 10:33:37.671224] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.671336] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 passed 00:08:11.235 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:11.235 Test: test_nvmf_tcp_icreq_handle 
...[2024-07-24 10:33:37.671447] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.671499] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.671568] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.671670] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:11.235 [2024-07-24 10:33:37.671771] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.671849] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.671889] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:11.235 [2024-07-24 10:33:37.671925] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.671966] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.672009] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.672058] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.672121] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 passed 00:08:11.235 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:11.235 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-24 10:33:37.672216] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:11.235 [2024-07-24 10:33:37.672269] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.672307] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed280 is same with the state(5) to be set 00:08:11.235 passed 00:08:11.235 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-24 10:33:37.672364] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffdda8edfe0 00:08:11.235 [2024-07-24 10:33:37.672457] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.672529] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.672579] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffdda8ed740 00:08:11.235 [2024-07-24 10:33:37.672620] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.672663] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.672714] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:11.235 [2024-07-24 10:33:37.672759] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.672821] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.672869] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:11.235 [2024-07-24 10:33:37.672910] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.672952] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.672992] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.673039] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.673102] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.235 [2024-07-24 10:33:37.673147] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.235 [2024-07-24 10:33:37.673197] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.236 [2024-07-24 10:33:37.673234] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.236 [2024-07-24 10:33:37.673280] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.236 [2024-07-24 10:33:37.673317] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.236 [2024-07-24 10:33:37.673378] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.236 passed 00:08:11.236 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-24 10:33:37.673417] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.236 [2024-07-24 
10:33:37.673461] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:11.236 [2024-07-24 10:33:37.673500] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdda8ed740 is same with the state(5) to be set 00:08:11.236 passed 00:08:11.236 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-24 10:33:37.698253] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:11.236 passed 00:08:11.236 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-24 10:33:37.698380] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:08:11.236 [2024-07-24 10:33:37.698852] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:11.236 [2024-07-24 10:33:37.698922] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:11.236 passed 00:08:11.236 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-24 10:33:37.699183] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:11.236 passed 00:08:11.236 00:08:11.236 [2024-07-24 10:33:37.699252] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:11.236 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.236 suites 1 1 n/a 0 0 00:08:11.236 tests 17 17 17 0 0 00:08:11.236 asserts 222 222 222 0 n/a 00:08:11.236 00:08:11.236 Elapsed time = 0.166 seconds 00:08:11.236 10:33:37 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:11.236 00:08:11.236 00:08:11.236 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.236 http://cunit.sourceforge.net/ 00:08:11.236 00:08:11.236 00:08:11.236 Suite: nvmf 00:08:11.236 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:11.236 00:08:11.236 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.236 suites 1 1 n/a 0 0 00:08:11.236 tests 1 1 1 0 0 00:08:11.236 asserts 17 17 17 0 n/a 00:08:11.236 00:08:11.236 Elapsed time = 0.022 seconds 00:08:11.236 ************************************ 00:08:11.236 END TEST unittest_nvmf 00:08:11.236 ************************************ 00:08:11.236 00:08:11.236 real 0m0.539s 00:08:11.236 user 0m0.256s 00:08:11.236 sys 0m0.285s 00:08:11.236 10:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.236 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:11.236 10:33:37 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:11.236 10:33:37 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:11.495 10:33:37 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:11.495 10:33:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.495 10:33:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.495 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:11.495 ************************************ 00:08:11.495 START TEST 
unittest_nvmf_rdma 00:08:11.495 ************************************ 00:08:11.495 10:33:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:11.495 00:08:11.495 00:08:11.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.495 http://cunit.sourceforge.net/ 00:08:11.495 00:08:11.495 00:08:11.495 Suite: nvmf 00:08:11.495 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-24 10:33:37.949848] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:11.495 [2024-07-24 10:33:37.950245] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:11.495 [2024-07-24 10:33:37.950317] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:11.495 passed 00:08:11.495 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:11.495 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:11.495 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:11.495 Test: test_nvmf_rdma_opts_init ...passed 00:08:11.495 Test: test_nvmf_rdma_request_free_data ...passed 00:08:11.495 Test: test_nvmf_rdma_update_ibv_state ...[2024-07-24 10:33:37.951864] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:08:11.495 passed 00:08:11.495 Test: test_nvmf_rdma_resources_create ...[2024-07-24 10:33:37.951960] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:08:11.495 passed 00:08:11.495 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:11.495 Test: test_nvmf_rdma_resize_cq ...[2024-07-24 10:33:37.953630] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:08:11.495 Using CQ of insufficient size may lead to CQ overrun 00:08:11.495 [2024-07-24 10:33:37.953765] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:11.495 [2024-07-24 10:33:37.953851] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:11.495 passed 00:08:11.495 00:08:11.495 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.495 suites 1 1 n/a 0 0 00:08:11.495 tests 10 10 10 0 0 00:08:11.495 asserts 584 584 584 0 n/a 00:08:11.495 00:08:11.495 Elapsed time = 0.004 seconds 00:08:11.495 00:08:11.495 real 0m0.046s 00:08:11.495 user 0m0.029s 00:08:11.495 sys 0m0.017s 00:08:11.495 10:33:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.495 10:33:37 -- common/autotest_common.sh@10 -- # set +x 00:08:11.495 ************************************ 00:08:11.495 END TEST unittest_nvmf_rdma 00:08:11.495 ************************************ 00:08:11.495 10:33:38 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:11.495 10:33:38 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:08:11.495 10:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.495 10:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.495 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:11.495 ************************************ 00:08:11.495 START TEST unittest_scsi 00:08:11.495 ************************************ 00:08:11.495 10:33:38 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:08:11.495 10:33:38 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:11.495 00:08:11.495 00:08:11.495 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.495 http://cunit.sourceforge.net/ 00:08:11.495 00:08:11.495 00:08:11.495 Suite: dev_suite 00:08:11.495 Test: dev_destruct_null_dev ...passed 00:08:11.495 Test: dev_destruct_zero_luns ...passed 00:08:11.495 Test: dev_destruct_null_lun ...passed 00:08:11.495 Test: dev_destruct_success ...passed 00:08:11.495 Test: dev_construct_num_luns_zero ...[2024-07-24 10:33:38.044482] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:11.495 passed 00:08:11.496 Test: dev_construct_no_lun_zero ...passed 00:08:11.496 Test: dev_construct_null_lun ...[2024-07-24 10:33:38.044834] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:11.496 passed 00:08:11.496 Test: dev_construct_name_too_long ...passed 00:08:11.496 Test: dev_construct_success ...[2024-07-24 10:33:38.044884] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:11.496 [2024-07-24 10:33:38.044929] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:11.496 passed 00:08:11.496 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:11.496 Test: 
dev_queue_mgmt_task_success ...passed 00:08:11.496 Test: dev_queue_task_success ...passed 00:08:11.496 Test: dev_stop_success ...passed 00:08:11.496 Test: dev_add_port_max_ports ...[2024-07-24 10:33:38.045217] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:11.496 passed 00:08:11.496 Test: dev_add_port_construct_failure1 ...[2024-07-24 10:33:38.045322] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:11.496 passed 00:08:11.496 Test: dev_add_port_construct_failure2 ...[2024-07-24 10:33:38.045430] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:11.496 passed 00:08:11.496 Test: dev_add_port_success1 ...passed 00:08:11.496 Test: dev_add_port_success2 ...passed 00:08:11.496 Test: dev_add_port_success3 ...passed 00:08:11.496 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:11.496 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:11.496 Test: dev_find_port_by_id_success ...passed 00:08:11.496 Test: dev_add_lun_bdev_not_found ...passed 00:08:11.496 Test: dev_add_lun_no_free_lun_id ...[2024-07-24 10:33:38.045812] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:11.496 passed 00:08:11.496 Test: dev_add_lun_success1 ...passed 00:08:11.496 Test: dev_add_lun_success2 ...passed 00:08:11.496 Test: dev_check_pending_tasks ...passed 00:08:11.496 Test: dev_iterate_luns ...passed 00:08:11.496 Test: dev_find_free_lun ...passed 00:08:11.496 00:08:11.496 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.496 suites 1 1 n/a 0 0 00:08:11.496 tests 29 29 29 0 0 00:08:11.496 asserts 97 97 97 0 n/a 00:08:11.496 00:08:11.496 Elapsed time = 0.002 seconds 00:08:11.496 10:33:38 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:11.496 00:08:11.496 00:08:11.496 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.496 http://cunit.sourceforge.net/ 00:08:11.496 00:08:11.496 00:08:11.496 Suite: lun_suite 00:08:11.496 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-24 10:33:38.083936] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:11.496 passed 00:08:11.496 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-24 10:33:38.084901] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:11.496 passed 00:08:11.496 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:11.496 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:11.496 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-24 10:33:38.085389] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:11.496 passed 00:08:11.496 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:08:11.496 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:11.496 Test: lun_append_task_null_lun_not_supported ...passed 00:08:11.496 Test: lun_execute_scsi_task_pending ...passed 00:08:11.496 Test: lun_execute_scsi_task_complete ...passed 00:08:11.496 Test: lun_execute_scsi_task_resize ...passed 00:08:11.496 Test: lun_destruct_success ...passed 00:08:11.496 Test: lun_construct_null_ctx ...[2024-07-24 10:33:38.086237] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: 
bdev_name must be non-NULL 00:08:11.496 passed 00:08:11.496 Test: lun_construct_success ...passed 00:08:11.496 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:08:11.496 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:11.496 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:11.496 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:11.496 00:08:11.496 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.496 suites 1 1 n/a 0 0 00:08:11.496 tests 18 18 18 0 0 00:08:11.496 asserts 153 153 153 0 n/a 00:08:11.496 00:08:11.496 Elapsed time = 0.004 seconds 00:08:11.496 10:33:38 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:11.496 00:08:11.496 00:08:11.496 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.496 http://cunit.sourceforge.net/ 00:08:11.496 00:08:11.496 00:08:11.496 Suite: scsi_suite 00:08:11.496 Test: scsi_init ...passed 00:08:11.496 00:08:11.496 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.496 suites 1 1 n/a 0 0 00:08:11.496 tests 1 1 1 0 0 00:08:11.496 asserts 1 1 1 0 n/a 00:08:11.496 00:08:11.496 Elapsed time = 0.000 seconds 00:08:11.496 10:33:38 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:11.496 00:08:11.496 00:08:11.496 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.496 http://cunit.sourceforge.net/ 00:08:11.496 00:08:11.496 00:08:11.496 Suite: translation_suite 00:08:11.496 Test: mode_select_6_test ...passed 00:08:11.496 Test: mode_select_6_test2 ...passed 00:08:11.496 Test: mode_sense_6_test ...passed 00:08:11.496 Test: mode_sense_10_test ...passed 00:08:11.496 Test: inquiry_evpd_test ...passed 00:08:11.496 Test: inquiry_standard_test ...passed 00:08:11.496 Test: inquiry_overflow_test ...passed 00:08:11.496 Test: task_complete_test ...passed 00:08:11.496 Test: lba_range_test ...passed 00:08:11.496 Test: xfer_len_test ...[2024-07-24 10:33:38.152224] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:11.496 passed 00:08:11.496 Test: xfer_test ...passed 00:08:11.496 Test: scsi_name_padding_test ...passed 00:08:11.496 Test: get_dif_ctx_test ...passed 00:08:11.496 Test: unmap_split_test ...passed 00:08:11.496 00:08:11.496 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.496 suites 1 1 n/a 0 0 00:08:11.496 tests 14 14 14 0 0 00:08:11.496 asserts 1200 1200 1200 0 n/a 00:08:11.496 00:08:11.496 Elapsed time = 0.006 seconds 00:08:11.496 10:33:38 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:11.755 00:08:11.755 00:08:11.755 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.755 http://cunit.sourceforge.net/ 00:08:11.755 00:08:11.755 00:08:11.755 Suite: reservation_suite 00:08:11.755 Test: test_reservation_register ...[2024-07-24 10:33:38.183664] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:11.755 passed 00:08:11.755 Test: test_reservation_reserve ...[2024-07-24 10:33:38.184097] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:11.755 [2024-07-24 10:33:38.184181] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:11.755 [2024-07-24 
10:33:38.184299] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:11.755 passed 00:08:11.755 Test: test_reservation_preempt_non_all_regs ...[2024-07-24 10:33:38.184384] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:11.755 [2024-07-24 10:33:38.184468] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:11.755 passed 00:08:11.755 Test: test_reservation_preempt_all_regs ...[2024-07-24 10:33:38.184643] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:11.755 passed 00:08:11.755 Test: test_reservation_cmds_conflict ...[2024-07-24 10:33:38.184807] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:11.755 [2024-07-24 10:33:38.184884] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:11.755 [2024-07-24 10:33:38.184936] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:11.755 [2024-07-24 10:33:38.184979] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:11.755 [2024-07-24 10:33:38.185032] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:11.755 passed 00:08:11.755 Test: test_scsi2_reserve_release ...[2024-07-24 10:33:38.185078] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:11.755 passed 00:08:11.755 Test: test_pr_with_scsi2_reserve_release ...[2024-07-24 10:33:38.185190] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:11.755 passed 00:08:11.755 00:08:11.755 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.755 suites 1 1 n/a 0 0 00:08:11.755 tests 7 7 7 0 0 00:08:11.755 asserts 257 257 257 0 n/a 00:08:11.755 00:08:11.755 Elapsed time = 0.002 seconds 00:08:11.755 00:08:11.755 real 0m0.174s 00:08:11.755 user 0m0.099s 00:08:11.755 sys 0m0.074s 00:08:11.755 10:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.755 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:11.755 ************************************ 00:08:11.755 END TEST unittest_scsi 00:08:11.755 ************************************ 00:08:11.755 10:33:38 -- unit/unittest.sh@276 -- # uname -s 00:08:11.755 10:33:38 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:08:11.755 10:33:38 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:08:11.755 10:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.755 10:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.755 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:11.755 ************************************ 00:08:11.755 START TEST unittest_sock 00:08:11.755 ************************************ 00:08:11.755 10:33:38 -- common/autotest_common.sh@1104 -- # unittest_sock 00:08:11.755 10:33:38 -- unit/unittest.sh@123 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:11.755 00:08:11.755 00:08:11.755 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.755 http://cunit.sourceforge.net/ 00:08:11.755 00:08:11.755 00:08:11.755 Suite: sock 00:08:11.755 Test: posix_sock ...passed 00:08:11.755 Test: ut_sock ...passed 00:08:11.755 Test: posix_sock_group ...passed 00:08:11.755 Test: ut_sock_group ...passed 00:08:11.755 Test: posix_sock_group_fairness ...passed 00:08:11.755 Test: _posix_sock_close ...passed 00:08:11.755 Test: sock_get_default_opts ...passed 00:08:11.755 Test: ut_sock_impl_get_set_opts ...passed 00:08:11.755 Test: posix_sock_impl_get_set_opts ...passed 00:08:11.755 Test: ut_sock_map ...passed 00:08:11.756 Test: override_impl_opts ...passed 00:08:11.756 Test: ut_sock_group_get_ctx ...passed 00:08:11.756 00:08:11.756 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.756 suites 1 1 n/a 0 0 00:08:11.756 tests 12 12 12 0 0 00:08:11.756 asserts 349 349 349 0 n/a 00:08:11.756 00:08:11.756 Elapsed time = 0.009 seconds 00:08:11.756 10:33:38 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:11.756 00:08:11.756 00:08:11.756 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.756 http://cunit.sourceforge.net/ 00:08:11.756 00:08:11.756 00:08:11.756 Suite: posix 00:08:11.756 Test: flush ...passed 00:08:11.756 00:08:11.756 Run Summary: Type Total Ran Passed Failed Inactive 00:08:11.756 suites 1 1 n/a 0 0 00:08:11.756 tests 1 1 1 0 0 00:08:11.756 asserts 28 28 28 0 n/a 00:08:11.756 00:08:11.756 Elapsed time = 0.000 seconds 00:08:11.756 10:33:38 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:11.756 00:08:11.756 real 0m0.094s 00:08:11.756 user 0m0.027s 00:08:11.756 sys 0m0.044s 00:08:11.756 10:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.756 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:11.756 ************************************ 00:08:11.756 END TEST unittest_sock 00:08:11.756 ************************************ 00:08:11.756 10:33:38 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:11.756 10:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:11.756 10:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.756 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:11.756 ************************************ 00:08:11.756 START TEST unittest_thread 00:08:11.756 ************************************ 00:08:11.756 10:33:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:11.756 00:08:11.756 00:08:11.756 CUnit - A unit testing framework for C - Version 2.1-3 00:08:11.756 http://cunit.sourceforge.net/ 00:08:11.756 00:08:11.756 00:08:11.756 Suite: io_channel 00:08:11.756 Test: thread_alloc ...passed 00:08:12.015 Test: thread_send_msg ...passed 00:08:12.015 Test: thread_poller ...passed 00:08:12.015 Test: poller_pause ...passed 00:08:12.015 Test: thread_for_each ...passed 00:08:12.015 Test: for_each_channel_remove ...passed 00:08:12.015 Test: for_each_channel_unreg ...[2024-07-24 10:33:38.446250] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffea523bc30 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:12.015 passed 00:08:12.015 Test: thread_name ...passed 
00:08:12.015 Test: channel ...[2024-07-24 10:33:38.450506] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x5587a737c0e0 00:08:12.015 passed 00:08:12.015 Test: channel_destroy_races ...passed 00:08:12.015 Test: thread_exit_test ...[2024-07-24 10:33:38.455631] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:08:12.015 passed 00:08:12.015 Test: thread_update_stats_test ...passed 00:08:12.015 Test: nested_channel ...passed 00:08:12.015 Test: device_unregister_and_thread_exit_race ...passed 00:08:12.015 Test: cache_closest_timed_poller ...passed 00:08:12.015 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:12.015 Test: io_device_lookup ...passed 00:08:12.015 Test: spdk_spin ...[2024-07-24 10:33:38.466519] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:12.015 [2024-07-24 10:33:38.466612] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffea523bc20 00:08:12.015 [2024-07-24 10:33:38.466740] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:12.015 [2024-07-24 10:33:38.468428] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:12.015 [2024-07-24 10:33:38.468519] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffea523bc20 00:08:12.015 [2024-07-24 10:33:38.468563] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:12.015 [2024-07-24 10:33:38.468604] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffea523bc20 00:08:12.015 [2024-07-24 10:33:38.468639] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:12.015 [2024-07-24 10:33:38.468687] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffea523bc20 00:08:12.015 [2024-07-24 10:33:38.468753] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:12.015 [2024-07-24 10:33:38.468819] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffea523bc20 00:08:12.015 passed 00:08:12.015 Test: for_each_channel_and_thread_exit_race ...passed 00:08:12.015 Test: for_each_thread_and_thread_exit_race ...passed 00:08:12.015 00:08:12.015 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.015 suites 1 1 n/a 0 0 00:08:12.015 tests 20 20 20 0 0 00:08:12.015 asserts 409 409 409 0 n/a 00:08:12.015 00:08:12.015 Elapsed time = 0.050 seconds 00:08:12.015 00:08:12.015 real 0m0.093s 00:08:12.015 user 0m0.065s 00:08:12.015 sys 0m0.029s 00:08:12.015 10:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.015 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:12.015 ************************************ 00:08:12.015 END TEST unittest_thread 00:08:12.015 
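The spdk_spin errors recorded above are the negative cases this suite drives on purpose. As a minimal sketch of the intended locking pattern, assuming the spdk_spin_* helpers declared in include/spdk/thread.h and a caller that is itself an SPDK thread (otherwise the "Not an SPDK thread" check above aborts):

#include <assert.h>
#include "spdk/thread.h"

/* Sketch only: run this from an SPDK thread, e.g. from a message delivered by
 * spdk_thread_send_msg() or from a registered poller. */
static void
spinlock_sketch(void)
{
        struct spdk_spinlock lock;

        spdk_spin_init(&lock);

        spdk_spin_lock(&lock);          /* "Deadlock detected" above fires if the holder locks again */
        assert(spdk_spin_held(&lock));  /* "Not an SPDK thread" above fires when checked off an SPDK thread */
        spdk_spin_unlock(&lock);        /* "Unlock on wrong SPDK thread" above fires from another thread */

        spdk_spin_destroy(&lock);       /* "Destroying a held spinlock" above fires if still held here */
}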
************************************ 00:08:12.015 10:33:38 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:12.015 10:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:12.015 10:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.015 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:12.015 ************************************ 00:08:12.015 START TEST unittest_iobuf 00:08:12.015 ************************************ 00:08:12.015 10:33:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:12.015 00:08:12.015 00:08:12.015 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.015 http://cunit.sourceforge.net/ 00:08:12.015 00:08:12.015 00:08:12.015 Suite: io_channel 00:08:12.015 Test: iobuf ...passed 00:08:12.015 Test: iobuf_cache ...[2024-07-24 10:33:38.572750] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:12.015 [2024-07-24 10:33:38.573160] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:12.015 [2024-07-24 10:33:38.573332] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:12.015 [2024-07-24 10:33:38.573397] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:12.015 [2024-07-24 10:33:38.573483] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:12.015 [2024-07-24 10:33:38.573544] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:08:12.015 passed 00:08:12.015 00:08:12.015 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.015 suites 1 1 n/a 0 0 00:08:12.015 tests 2 2 2 0 0 00:08:12.015 asserts 107 107 107 0 n/a 00:08:12.015 00:08:12.015 Elapsed time = 0.006 seconds 00:08:12.015 00:08:12.015 real 0m0.044s 00:08:12.015 user 0m0.040s 00:08:12.015 sys 0m0.005s 00:08:12.015 10:33:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.015 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:12.015 ************************************ 00:08:12.015 END TEST unittest_iobuf 00:08:12.015 ************************************ 00:08:12.015 10:33:38 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:08:12.015 10:33:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:12.015 10:33:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.015 10:33:38 -- common/autotest_common.sh@10 -- # set +x 00:08:12.015 ************************************ 00:08:12.015 START TEST unittest_util 00:08:12.015 ************************************ 00:08:12.015 10:33:38 -- common/autotest_common.sh@1104 -- # unittest_util 00:08:12.015 10:33:38 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:12.015 00:08:12.015 00:08:12.015 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.015 http://cunit.sourceforge.net/ 00:08:12.015 00:08:12.015 00:08:12.015 Suite: base64 00:08:12.015 Test: test_base64_get_encoded_strlen ...passed 00:08:12.015 Test: test_base64_get_decoded_len ...passed 00:08:12.015 Test: test_base64_encode ...passed 00:08:12.015 Test: test_base64_decode ...passed 00:08:12.015 Test: test_base64_urlsafe_encode ...passed 00:08:12.016 Test: test_base64_urlsafe_decode ...passed 00:08:12.016 00:08:12.016 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.016 suites 1 1 n/a 0 0 00:08:12.016 tests 6 6 6 0 0 00:08:12.016 asserts 112 112 112 0 n/a 00:08:12.016 00:08:12.016 Elapsed time = 0.000 seconds 00:08:12.016 10:33:38 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:12.275 00:08:12.275 00:08:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.275 http://cunit.sourceforge.net/ 00:08:12.275 00:08:12.275 00:08:12.275 Suite: bit_array 00:08:12.275 Test: test_1bit ...passed 00:08:12.275 Test: test_64bit ...passed 00:08:12.275 Test: test_find ...passed 00:08:12.275 Test: test_resize ...passed 00:08:12.275 Test: test_errors ...passed 00:08:12.275 Test: test_count ...passed 00:08:12.275 Test: test_mask_store_load ...passed 00:08:12.275 Test: test_mask_clear ...passed 00:08:12.275 00:08:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.275 suites 1 1 n/a 0 0 00:08:12.275 tests 8 8 8 0 0 00:08:12.275 asserts 5075 5075 5075 0 n/a 00:08:12.275 00:08:12.275 Elapsed time = 0.002 seconds 00:08:12.275 10:33:38 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:12.275 00:08:12.275 00:08:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.275 http://cunit.sourceforge.net/ 00:08:12.275 00:08:12.275 00:08:12.275 Suite: cpuset 00:08:12.275 Test: test_cpuset ...passed 00:08:12.275 Test: test_cpuset_parse ...[2024-07-24 10:33:38.731138] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:12.275 [2024-07-24 10:33:38.731497] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:08:12.275 [2024-07-24 10:33:38.731620] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:12.275 [2024-07-24 10:33:38.731714] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:12.275 [2024-07-24 10:33:38.731755] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:12.275 [2024-07-24 10:33:38.731803] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:12.275 [2024-07-24 10:33:38.731843] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:12.275 [2024-07-24 10:33:38.731901] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:12.275 passed 00:08:12.275 Test: test_cpuset_fmt ...passed 00:08:12.275 00:08:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.275 suites 1 1 n/a 0 0 00:08:12.275 tests 3 3 3 0 0 00:08:12.275 asserts 65 65 65 0 n/a 00:08:12.275 00:08:12.275 Elapsed time = 0.002 seconds 00:08:12.275 10:33:38 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:12.275 00:08:12.275 00:08:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.275 http://cunit.sourceforge.net/ 00:08:12.275 00:08:12.275 00:08:12.275 Suite: crc16 00:08:12.275 Test: test_crc16_t10dif ...passed 00:08:12.275 Test: test_crc16_t10dif_seed ...passed 00:08:12.275 Test: test_crc16_t10dif_copy ...passed 00:08:12.275 00:08:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.275 suites 1 1 n/a 0 0 00:08:12.275 tests 3 3 3 0 0 00:08:12.275 asserts 5 5 5 0 n/a 00:08:12.275 00:08:12.275 Elapsed time = 0.000 seconds 00:08:12.275 10:33:38 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:12.275 00:08:12.275 00:08:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.275 http://cunit.sourceforge.net/ 00:08:12.275 00:08:12.275 00:08:12.275 Suite: crc32_ieee 00:08:12.275 Test: test_crc32_ieee ...passed 00:08:12.275 00:08:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.275 suites 1 1 n/a 0 0 00:08:12.275 tests 1 1 1 0 0 00:08:12.275 asserts 1 1 1 0 n/a 00:08:12.275 00:08:12.275 Elapsed time = 0.000 seconds 00:08:12.275 10:33:38 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:12.275 00:08:12.275 00:08:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.275 http://cunit.sourceforge.net/ 00:08:12.275 00:08:12.275 00:08:12.275 Suite: crc32c 00:08:12.275 Test: test_crc32c ...passed 00:08:12.275 Test: test_crc32c_nvme ...passed 00:08:12.275 00:08:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.275 suites 1 1 n/a 0 0 00:08:12.275 tests 2 2 2 0 0 00:08:12.275 asserts 16 16 16 0 n/a 00:08:12.275 00:08:12.275 Elapsed time = 0.000 seconds 00:08:12.275 10:33:38 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:12.275 00:08:12.275 00:08:12.275 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.275 http://cunit.sourceforge.net/ 00:08:12.275 00:08:12.275 00:08:12.275 Suite: crc64 00:08:12.275 Test: test_crc64_nvme 
...passed 00:08:12.275 00:08:12.275 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.275 suites 1 1 n/a 0 0 00:08:12.275 tests 1 1 1 0 0 00:08:12.275 asserts 4 4 4 0 n/a 00:08:12.275 00:08:12.275 Elapsed time = 0.000 seconds 00:08:12.276 10:33:38 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:12.276 00:08:12.276 00:08:12.276 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.276 http://cunit.sourceforge.net/ 00:08:12.276 00:08:12.276 00:08:12.276 Suite: string 00:08:12.276 Test: test_parse_ip_addr ...passed 00:08:12.276 Test: test_str_chomp ...passed 00:08:12.276 Test: test_parse_capacity ...passed 00:08:12.276 Test: test_sprintf_append_realloc ...passed 00:08:12.276 Test: test_strtol ...passed 00:08:12.276 Test: test_strtoll ...passed 00:08:12.276 Test: test_strarray ...passed 00:08:12.276 Test: test_strcpy_replace ...passed 00:08:12.276 00:08:12.276 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.276 suites 1 1 n/a 0 0 00:08:12.276 tests 8 8 8 0 0 00:08:12.276 asserts 161 161 161 0 n/a 00:08:12.276 00:08:12.276 Elapsed time = 0.001 seconds 00:08:12.276 10:33:38 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:12.276 00:08:12.276 00:08:12.276 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.276 http://cunit.sourceforge.net/ 00:08:12.276 00:08:12.276 00:08:12.276 Suite: dif 00:08:12.276 Test: dif_generate_and_verify_test ...[2024-07-24 10:33:38.918838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:12.276 [2024-07-24 10:33:38.919379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:12.276 [2024-07-24 10:33:38.919699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:12.276 [2024-07-24 10:33:38.920011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:12.276 [2024-07-24 10:33:38.920296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:12.276 [2024-07-24 10:33:38.920589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:12.276 passed 00:08:12.276 Test: dif_disable_check_test ...[2024-07-24 10:33:38.921678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:12.276 [2024-07-24 10:33:38.922044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:12.276 [2024-07-24 10:33:38.922336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:12.276 passed 00:08:12.276 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-24 10:33:38.923397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:12.276 [2024-07-24 10:33:38.923735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:12.276 [2024-07-24 
10:33:38.924070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:12.276 [2024-07-24 10:33:38.924439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:12.276 [2024-07-24 10:33:38.924792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:12.276 [2024-07-24 10:33:38.925116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:12.276 [2024-07-24 10:33:38.925433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:12.276 [2024-07-24 10:33:38.925741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:12.276 [2024-07-24 10:33:38.926078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:12.276 [2024-07-24 10:33:38.926448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:12.276 [2024-07-24 10:33:38.926792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:12.276 passed 00:08:12.276 Test: dif_apptag_mask_test ...[2024-07-24 10:33:38.927119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:12.276 [2024-07-24 10:33:38.927432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:12.276 passed 00:08:12.276 Test: dif_sec_512_md_0_error_test ...[2024-07-24 10:33:38.927759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:12.276 passed 00:08:12.276 Test: dif_sec_4096_md_0_error_test ...[2024-07-24 10:33:38.927816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:12.276 [2024-07-24 10:33:38.927866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:12.276 passed 00:08:12.276 Test: dif_sec_4100_md_128_error_test ...[2024-07-24 10:33:38.927926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:12.276 passed 00:08:12.276 Test: dif_guard_seed_test ...[2024-07-24 10:33:38.927976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:12.276 passed 00:08:12.276 Test: dif_guard_value_test ...passed 00:08:12.276 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:12.276 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:12.276 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:12.276 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:12.276 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:12.537 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:12.537 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:12.537 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:12.537 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:12.537 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:12.537 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:12.537 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-24 10:33:38.973151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd48, Actual=fd4c 00:08:12.537 [2024-07-24 10:33:38.975722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe25, Actual=fe21 00:08:12.537 [2024-07-24 10:33:38.978314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:08:12.537 [2024-07-24 10:33:38.980959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:08:12.537 [2024-07-24 10:33:38.983545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4005d 00:08:12.537 [2024-07-24 10:33:38.986049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=4005d 00:08:12.537 [2024-07-24 10:33:38.988545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=1f5d 00:08:12.537 [2024-07-24 10:33:38.990422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=567d 00:08:12.537 [2024-07-24 10:33:38.992316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=93, Expected=1ab353ed, Actual=1ab753ed 00:08:12.537 [2024-07-24 10:33:38.994806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38534660, Actual=38574660 00:08:12.537 [2024-07-24 10:33:38.997363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:08:12.537 [2024-07-24 10:33:38.999902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=8c 00:08:12.537 [2024-07-24 10:33:39.002472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=400000000005d 00:08:12.537 [2024-07-24 10:33:39.005068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.537 [2024-07-24 10:33:39.007607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=c25a2342 00:08:12.537 [2024-07-24 10:33:39.009557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=f4630d12 00:08:12.537 [2024-07-24 10:33:39.011526] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.537 [2024-07-24 10:33:39.014029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d48378266, Actual=88010a2d4837a266 00:08:12.537 [2024-07-24 10:33:39.016537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.537 [2024-07-24 10:33:39.019027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.537 [2024-07-24 10:33:39.021525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=20000000005d 00:08:12.537 [2024-07-24 10:33:39.024018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=20000000005d 00:08:12.537 [2024-07-24 10:33:39.026545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.537 [2024-07-24 10:33:39.028468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=a187dc20edfe3f6c 00:08:12.537 passed 00:08:12.537 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-24 10:33:39.029495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:12.537 [2024-07-24 10:33:39.029808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:12.537 [2024-07-24 10:33:39.030117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.537 [2024-07-24 10:33:39.030447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to 
compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.537 [2024-07-24 10:33:39.030793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.537 [2024-07-24 10:33:39.031093] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.537 [2024-07-24 10:33:39.031399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1480 00:08:12.537 [2024-07-24 10:33:39.031692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8b1e 00:08:12.537 [2024-07-24 10:33:39.031986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab773ed, Actual=1ab753ed 00:08:12.537 [2024-07-24 10:33:39.032296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38576660, Actual=38574660 00:08:12.537 [2024-07-24 10:33:39.032632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.537 [2024-07-24 10:33:39.032958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.033274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.033578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.033884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c25a2342 00:08:12.538 [2024-07-24 10:33:39.034158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f4630d12 00:08:12.538 [2024-07-24 10:33:39.034489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.538 [2024-07-24 10:33:39.034798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48378266, Actual=88010a2d4837a266 00:08:12.538 [2024-07-24 10:33:39.035108] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.035407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.035738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.538 [2024-07-24 10:33:39.036038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.538 [2024-07-24 10:33:39.036365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.538 [2024-07-24 10:33:39.036639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=a187dc20edfe3f6c 00:08:12.538 passed 00:08:12.538 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-24 10:33:39.036988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:12.538 [2024-07-24 10:33:39.037308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:12.538 [2024-07-24 10:33:39.037613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.037930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.038258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.038579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.038890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1480 00:08:12.538 [2024-07-24 10:33:39.039167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8b1e 00:08:12.538 [2024-07-24 10:33:39.039433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab773ed, Actual=1ab753ed 00:08:12.538 [2024-07-24 10:33:39.039763] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38576660, Actual=38574660 00:08:12.538 [2024-07-24 10:33:39.040076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.040378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.040690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.041018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.041330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c25a2342 00:08:12.538 [2024-07-24 10:33:39.041609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f4630d12 00:08:12.538 [2024-07-24 10:33:39.041905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.538 [2024-07-24 10:33:39.042221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48378266, Actual=88010a2d4837a266 00:08:12.538 [2024-07-24 10:33:39.042555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.042868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.043182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.538 [2024-07-24 10:33:39.043490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.538 [2024-07-24 10:33:39.043848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.538 [2024-07-24 10:33:39.044120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=a187dc20edfe3f6c 00:08:12.538 passed 00:08:12.538 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-24 10:33:39.044441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:12.538 [2024-07-24 10:33:39.044785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:12.538 [2024-07-24 10:33:39.045104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.045410] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.045748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.046072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.046395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1480 00:08:12.538 [2024-07-24 10:33:39.046667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8b1e 00:08:12.538 [2024-07-24 10:33:39.046938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab773ed, Actual=1ab753ed 00:08:12.538 [2024-07-24 10:33:39.047243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38576660, Actual=38574660 00:08:12.538 [2024-07-24 10:33:39.047595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.047909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.048220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.048537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.048858] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c25a2342 00:08:12.538 [2024-07-24 10:33:39.049142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f4630d12 00:08:12.538 [2024-07-24 10:33:39.049424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.538 [2024-07-24 10:33:39.049736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48378266, Actual=88010a2d4837a266 00:08:12.538 [2024-07-24 10:33:39.050052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.050375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.050689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.538 [2024-07-24 10:33:39.051000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.538 [2024-07-24 10:33:39.051330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.538 [2024-07-24 10:33:39.051625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=a187dc20edfe3f6c 00:08:12.538 passed 00:08:12.538 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-24 10:33:39.051983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:12.538 [2024-07-24 10:33:39.052298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:12.538 [2024-07-24 10:33:39.052609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.052931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.053260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.053565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.053876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1480 00:08:12.538 [2024-07-24 10:33:39.054166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8b1e 00:08:12.538 passed 00:08:12.538 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-24 10:33:39.054516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=1ab773ed, Actual=1ab753ed 00:08:12.538 [2024-07-24 10:33:39.054819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38576660, Actual=38574660 00:08:12.538 [2024-07-24 10:33:39.055148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.055452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.538 [2024-07-24 10:33:39.055784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.538 [2024-07-24 10:33:39.056089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.539 [2024-07-24 10:33:39.056399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c25a2342 00:08:12.539 [2024-07-24 10:33:39.056672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f4630d12 00:08:12.539 [2024-07-24 10:33:39.057022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.539 [2024-07-24 10:33:39.057332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48378266, Actual=88010a2d4837a266 00:08:12.539 [2024-07-24 10:33:39.057635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.057946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.058260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.539 [2024-07-24 10:33:39.058588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.539 [2024-07-24 10:33:39.058916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.539 [2024-07-24 10:33:39.059197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=a187dc20edfe3f6c 00:08:12.539 passed 00:08:12.539 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-24 10:33:39.059555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=dd4c, Actual=fd4c 00:08:12.539 [2024-07-24 10:33:39.059862] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=de21, Actual=fe21 00:08:12.539 [2024-07-24 10:33:39.060167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.060472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, 
Actual=2088 00:08:12.539 [2024-07-24 10:33:39.060812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.539 [2024-07-24 10:33:39.061131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.539 [2024-07-24 10:33:39.061439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=1480 00:08:12.539 [2024-07-24 10:33:39.061710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=8b1e 00:08:12.539 passed 00:08:12.539 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-24 10:33:39.062033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab773ed, Actual=1ab753ed 00:08:12.539 [2024-07-24 10:33:39.062355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38576660, Actual=38574660 00:08:12.539 [2024-07-24 10:33:39.062685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.062990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.063306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.539 [2024-07-24 10:33:39.063639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=2058 00:08:12.539 [2024-07-24 10:33:39.063954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c25a2342 00:08:12.539 [2024-07-24 10:33:39.064218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=f4630d12 00:08:12.539 [2024-07-24 10:33:39.064558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.539 [2024-07-24 10:33:39.064886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d48378266, Actual=88010a2d4837a266 00:08:12.539 [2024-07-24 10:33:39.065203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.065501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.065798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.539 [2024-07-24 10:33:39.066113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200000000058 00:08:12.539 [2024-07-24 10:33:39.066454] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.539 [2024-07-24 10:33:39.066735] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=a187dc20edfe3f6c 00:08:12.539 passed 00:08:12.539 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:12.539 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:12.539 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:12.539 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:12.539 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:12.539 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:12.539 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:12.539 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:12.539 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:12.539 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-24 10:33:39.111573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=dd4c, Actual=fd4c 00:08:12.539 [2024-07-24 10:33:39.112775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=54b2, Actual=74b2 00:08:12.539 [2024-07-24 10:33:39.114032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.115212] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.116373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.539 [2024-07-24 10:33:39.117519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.539 [2024-07-24 10:33:39.118652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=1480 00:08:12.539 [2024-07-24 10:33:39.119957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=847e 00:08:12.539 [2024-07-24 10:33:39.121199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab773ed, Actual=1ab753ed 00:08:12.539 [2024-07-24 10:33:39.122347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=963609d2, Actual=963629d2 00:08:12.539 [2024-07-24 10:33:39.123493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.124696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.125848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.539 [2024-07-24 10:33:39.126999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.539 [2024-07-24 10:33:39.128186] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, 
Expected=1ab753ed, Actual=c25a2342 00:08:12.539 [2024-07-24 10:33:39.129403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=96092eea 00:08:12.539 [2024-07-24 10:33:39.130555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.539 [2024-07-24 10:33:39.131773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=b25e103d3f17be52, Actual=b25e103d3f179e52 00:08:12.539 [2024-07-24 10:33:39.133006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.134264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.135411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=20000000005d 00:08:12.539 [2024-07-24 10:33:39.136582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=20000000005d 00:08:12.539 [2024-07-24 10:33:39.137724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.539 [2024-07-24 10:33:39.138907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=944c39f5ca622c3a 00:08:12.539 passed 00:08:12.539 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-24 10:33:39.139305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=dd4c, Actual=fd4c 00:08:12.539 [2024-07-24 10:33:39.139619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3733, Actual=1733 00:08:12.539 [2024-07-24 10:33:39.139911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.140196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.140530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.539 [2024-07-24 10:33:39.140865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.539 [2024-07-24 10:33:39.141211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=1480 00:08:12.539 [2024-07-24 10:33:39.141509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=e7ff 00:08:12.539 [2024-07-24 10:33:39.141795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab773ed, Actual=1ab753ed 00:08:12.539 [2024-07-24 10:33:39.142089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=57b63c27, Actual=57b61c27 
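The guard tag that _dif_verify checks throughout this suite is a CRC computed over each data block; for the traditional 512+8 protection-information format it is the 16-bit T10-DIF CRC, while the wider guard values interleaved above come from the extended PI formats the same tests also cover. A small self-contained sketch of computing such a guard with SPDK's helper (spdk_crc16_t10dif from include/spdk/crc16.h; the signature is assumed from the public headers, not taken from this run):

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "spdk/crc16.h"

int
main(void)
{
        unsigned char block[512];
        uint16_t guard;

        /* Fill one 512-byte data block with a known pattern. */
        memset(block, 0xab, sizeof(block));

        /* For the classic 8-byte PI format, the guard tag is the T10-DIF CRC of the block. */
        guard = spdk_crc16_t10dif(0, block, sizeof(block));

        printf("guard tag = 0x%04x\n", guard);
        return 0;
}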
00:08:12.539 [2024-07-24 10:33:39.142390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.142708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.539 [2024-07-24 10:33:39.143007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.540 [2024-07-24 10:33:39.143324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.540 [2024-07-24 10:33:39.143626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=c25a2342 00:08:12.540 [2024-07-24 10:33:39.143921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=57891b1f 00:08:12.540 [2024-07-24 10:33:39.144249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.540 [2024-07-24 10:33:39.144542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=47bc1fae19f83a0d, Actual=47bc1fae19f81a0d 00:08:12.540 [2024-07-24 10:33:39.144853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.145148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.145441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200000000059 00:08:12.540 [2024-07-24 10:33:39.145718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200000000059 00:08:12.540 [2024-07-24 10:33:39.146022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.540 passed 00:08:12.540 Test: dix_sec_512_md_0_error ...[2024-07-24 10:33:39.146314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=61ae3666ec8da865 00:08:12.540 [2024-07-24 10:33:39.146406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:12.540 passed 00:08:12.540 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:08:12.540 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:12.540 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:12.540 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:12.540 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:12.540 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:12.540 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:12.540 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:12.540 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:12.540 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-24 10:33:39.190784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=dd4c, Actual=fd4c 00:08:12.540 [2024-07-24 10:33:39.191989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=54b2, Actual=74b2 00:08:12.540 [2024-07-24 10:33:39.193153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.194266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.195422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.540 [2024-07-24 10:33:39.196582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.540 [2024-07-24 10:33:39.197718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=1480 00:08:12.540 [2024-07-24 10:33:39.198855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=847e 00:08:12.540 [2024-07-24 10:33:39.199986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab773ed, Actual=1ab753ed 00:08:12.540 [2024-07-24 10:33:39.201127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=963609d2, Actual=963629d2 00:08:12.540 [2024-07-24 10:33:39.202282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.203399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.204553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.540 [2024-07-24 10:33:39.205701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=205d 00:08:12.540 [2024-07-24 10:33:39.206827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=c25a2342 00:08:12.540 [2024-07-24 10:33:39.207973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, 
Actual=96092eea 00:08:12.540 [2024-07-24 10:33:39.209175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.540 [2024-07-24 10:33:39.210291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=b25e103d3f17be52, Actual=b25e103d3f179e52 00:08:12.540 [2024-07-24 10:33:39.211406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.212562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=2088 00:08:12.540 [2024-07-24 10:33:39.213709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=20000000005d 00:08:12.799 [2024-07-24 10:33:39.214828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=20000000005d 00:08:12.799 [2024-07-24 10:33:39.216017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.799 passed 00:08:12.799 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-24 10:33:39.217157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=944c39f5ca622c3a 00:08:12.799 [2024-07-24 10:33:39.217565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=dd4c, Actual=fd4c 00:08:12.799 [2024-07-24 10:33:39.217845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=3733, Actual=1733 00:08:12.799 [2024-07-24 10:33:39.218129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.799 [2024-07-24 10:33:39.218420] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.799 [2024-07-24 10:33:39.218730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.799 [2024-07-24 10:33:39.219005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.799 [2024-07-24 10:33:39.219274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=1480 00:08:12.799 [2024-07-24 10:33:39.219570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=e7ff 00:08:12.799 [2024-07-24 10:33:39.219853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab773ed, Actual=1ab753ed 00:08:12.799 [2024-07-24 10:33:39.220141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=57b63c27, Actual=57b61c27 00:08:12.799 [2024-07-24 10:33:39.220441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.799 [2024-07-24 
10:33:39.220729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.799 [2024-07-24 10:33:39.221026] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.799 [2024-07-24 10:33:39.221313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=2059 00:08:12.799 [2024-07-24 10:33:39.221574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=c25a2342 00:08:12.799 [2024-07-24 10:33:39.221852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=57891b1f 00:08:12.799 [2024-07-24 10:33:39.222140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc00d3, Actual=a576a7728ecc20d3 00:08:12.800 [2024-07-24 10:33:39.222423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=47bc1fae19f83a0d, Actual=47bc1fae19f81a0d 00:08:12.800 [2024-07-24 10:33:39.222684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.800 [2024-07-24 10:33:39.222974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=2088 00:08:12.800 [2024-07-24 10:33:39.223244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200000000059 00:08:12.800 [2024-07-24 10:33:39.223541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=200000000059 00:08:12.800 [2024-07-24 10:33:39.223818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=4302d8c3853286e6 00:08:12.800 [2024-07-24 10:33:39.224105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=61ae3666ec8da865 00:08:12.800 passed 00:08:12.800 Test: set_md_interleave_iovs_test ...passed 00:08:12.800 Test: set_md_interleave_iovs_split_test ...passed 00:08:12.800 Test: dif_generate_stream_pi_16_test ...passed 00:08:12.800 Test: dif_generate_stream_test ...passed 00:08:12.800 Test: set_md_interleave_iovs_alignment_test ...[2024-07-24 10:33:39.231680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
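The "Failed to compare Guard / App Tag / Ref Tag" errors above come from tests that deliberately corrupt protection information and expect verification to flag it at a given LBA. For orientation, a minimal, self-contained C sketch of the 8-byte T10 DIF tuple and the per-field comparison those messages describe follows; it is an illustration of the concept only, not the SPDK code in lib/util/dif.c, and the sample values are borrowed loosely from the log.

/*
 * Simplified view of the 8-byte T10 DIF tuple whose fields (Guard, App Tag,
 * Ref Tag) appear in the errors above. Endianness handling and the extended
 * 32/64-bit guard formats are deliberately omitted; this only shows the idea.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct dif_tuple {
    uint16_t guard;    /* CRC computed over the data block */
    uint16_t app_tag;  /* application-defined tag */
    uint32_t ref_tag;  /* typically derived from the LBA */
};

/* Compare the stored tuple against the values recomputed from the data block. */
static bool dif_verify_block(uint64_t lba, const struct dif_tuple *expected,
                             const struct dif_tuple *actual)
{
    bool ok = true;

    if (expected->guard != actual->guard) {
        fprintf(stderr, "Failed to compare Guard: LBA=%llu, Expected=%x, Actual=%x\n",
                (unsigned long long)lba, expected->guard, actual->guard);
        ok = false;
    }
    if (expected->app_tag != actual->app_tag) {
        fprintf(stderr, "Failed to compare App Tag: LBA=%llu, Expected=%x, Actual=%x\n",
                (unsigned long long)lba, expected->app_tag, actual->app_tag);
        ok = false;
    }
    if (expected->ref_tag != actual->ref_tag) {
        fprintf(stderr, "Failed to compare Ref Tag: LBA=%llu, Expected=%x, Actual=%x\n",
                (unsigned long long)lba, (unsigned)expected->ref_tag,
                (unsigned)actual->ref_tag);
        ok = false;
    }
    return ok;
}

int main(void)
{
    struct dif_tuple expected = { .guard = 0xfd4c, .app_tag = 0x88,   .ref_tag = 0x59 };
    struct dif_tuple injected = { .guard = 0xfd4c, .app_tag = 0x2088, .ref_tag = 0x59 };

    /* Mirrors the inject_* tests above: a corrupted App Tag must be reported. */
    return dif_verify_block(89, &expected, &injected) ? 0 : 1;
}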
00:08:12.800 passed 00:08:12.800 Test: dif_generate_split_test ...passed 00:08:12.800 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:12.800 Test: dif_verify_split_test ...passed 00:08:12.800 Test: dif_verify_stream_multi_segments_test ...passed 00:08:12.800 Test: update_crc32c_pi_16_test ...passed 00:08:12.800 Test: update_crc32c_test ...passed 00:08:12.800 Test: dif_update_crc32c_split_test ...passed 00:08:12.800 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:12.800 Test: get_range_with_md_test ...passed 00:08:12.800 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:12.800 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:12.800 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:12.800 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:12.800 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:12.800 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:12.800 Test: dif_generate_and_verify_unmap_test ...passed 00:08:12.800 00:08:12.800 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.800 suites 1 1 n/a 0 0 00:08:12.800 tests 79 79 79 0 0 00:08:12.800 asserts 3584 3584 3584 0 n/a 00:08:12.800 00:08:12.800 Elapsed time = 0.366 seconds 00:08:12.800 10:33:39 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:12.800 00:08:12.800 00:08:12.800 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.800 http://cunit.sourceforge.net/ 00:08:12.800 00:08:12.800 00:08:12.800 Suite: iov 00:08:12.800 Test: test_single_iov ...passed 00:08:12.800 Test: test_simple_iov ...passed 00:08:12.800 Test: test_complex_iov ...passed 00:08:12.800 Test: test_iovs_to_buf ...passed 00:08:12.800 Test: test_buf_to_iovs ...passed 00:08:12.800 Test: test_memset ...passed 00:08:12.800 Test: test_iov_one ...passed 00:08:12.800 Test: test_iov_xfer ...passed 00:08:12.800 00:08:12.800 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.800 suites 1 1 n/a 0 0 00:08:12.800 tests 8 8 8 0 0 00:08:12.800 asserts 156 156 156 0 n/a 00:08:12.800 00:08:12.800 Elapsed time = 0.000 seconds 00:08:12.800 10:33:39 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:12.800 00:08:12.800 00:08:12.800 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.800 http://cunit.sourceforge.net/ 00:08:12.800 00:08:12.800 00:08:12.800 Suite: math 00:08:12.800 Test: test_serial_number_arithmetic ...passed 00:08:12.800 Suite: erase 00:08:12.800 Test: test_memset_s ...passed 00:08:12.800 00:08:12.800 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.800 suites 2 2 n/a 0 0 00:08:12.800 tests 2 2 2 0 0 00:08:12.800 asserts 18 18 18 0 n/a 00:08:12.800 00:08:12.800 Elapsed time = 0.000 seconds 00:08:12.800 10:33:39 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:12.800 00:08:12.800 00:08:12.800 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.800 http://cunit.sourceforge.net/ 00:08:12.800 00:08:12.800 00:08:12.800 Suite: pipe 00:08:12.800 Test: test_create_destroy ...passed 00:08:12.800 Test: test_write_get_buffer ...passed 00:08:12.800 Test: test_write_advance ...passed 00:08:12.800 Test: test_read_get_buffer ...passed 00:08:12.800 Test: test_read_advance ...passed 00:08:12.800 Test: test_data ...passed 00:08:12.800 00:08:12.800 Run Summary: Type Total Ran 
Passed Failed Inactive 00:08:12.800 suites 1 1 n/a 0 0 00:08:12.800 tests 6 6 6 0 0 00:08:12.800 asserts 250 250 250 0 n/a 00:08:12.800 00:08:12.800 Elapsed time = 0.000 seconds 00:08:12.800 10:33:39 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:12.800 00:08:12.800 00:08:12.800 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.800 http://cunit.sourceforge.net/ 00:08:12.800 00:08:12.800 00:08:12.800 Suite: xor 00:08:12.800 Test: test_xor_gen ...passed 00:08:12.800 00:08:12.800 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.800 suites 1 1 n/a 0 0 00:08:12.800 tests 1 1 1 0 0 00:08:12.800 asserts 17 17 17 0 n/a 00:08:12.800 00:08:12.800 Elapsed time = 0.007 seconds 00:08:12.800 00:08:12.800 real 0m0.785s 00:08:12.800 user 0m0.574s 00:08:12.800 sys 0m0.214s 00:08:12.800 10:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.800 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:12.800 ************************************ 00:08:12.800 END TEST unittest_util 00:08:12.800 ************************************ 00:08:12.800 10:33:39 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:13.060 10:33:39 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:13.060 10:33:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.060 10:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.060 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.060 ************************************ 00:08:13.060 START TEST unittest_vhost 00:08:13.060 ************************************ 00:08:13.060 10:33:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:13.060 00:08:13.060 00:08:13.060 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.060 http://cunit.sourceforge.net/ 00:08:13.060 00:08:13.060 00:08:13.060 Suite: vhost_suite 00:08:13.060 Test: desc_to_iov_test ...[2024-07-24 10:33:39.511844] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:13.060 passed 00:08:13.060 Test: create_controller_test ...[2024-07-24 10:33:39.516529] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:13.060 [2024-07-24 10:33:39.516663] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:13.060 [2024-07-24 10:33:39.516840] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:13.060 [2024-07-24 10:33:39.516984] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:13.060 [2024-07-24 10:33:39.517109] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:13.060 [2024-07-24 10:33:39.517285] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-24 10:33:39.518416] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:13.060 passed 00:08:13.060 Test: session_find_by_vid_test ...passed 00:08:13.060 Test: remove_controller_test ...[2024-07-24 10:33:39.520727] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:13.060 passed 00:08:13.060 Test: vq_avail_ring_get_test ...passed 00:08:13.060 Test: vq_packed_ring_test ...passed 00:08:13.060 Test: vhost_blk_construct_test ...passed 00:08:13.060 00:08:13.060 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.060 suites 1 1 n/a 0 0 00:08:13.060 tests 7 7 7 0 0 00:08:13.060 asserts 145 145 145 0 n/a 00:08:13.060 00:08:13.060 Elapsed time = 0.013 seconds 00:08:13.060 00:08:13.060 real 0m0.052s 00:08:13.060 user 0m0.033s 00:08:13.060 sys 0m0.020s 00:08:13.060 10:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.060 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.060 ************************************ 00:08:13.060 END TEST unittest_vhost 00:08:13.060 ************************************ 00:08:13.060 10:33:39 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:13.060 10:33:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.060 10:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.061 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 ************************************ 00:08:13.061 START TEST unittest_dma 00:08:13.061 ************************************ 00:08:13.061 10:33:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:13.061 00:08:13.061 00:08:13.061 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.061 http://cunit.sourceforge.net/ 00:08:13.061 00:08:13.061 00:08:13.061 Suite: dma_suite 00:08:13.061 Test: test_dma ...[2024-07-24 10:33:39.606483] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:13.061 passed 00:08:13.061 00:08:13.061 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.061 suites 1 1 n/a 0 0 00:08:13.061 tests 1 1 1 0 0 00:08:13.061 asserts 50 50 50 0 n/a 00:08:13.061 00:08:13.061 Elapsed time = 0.001 seconds 00:08:13.061 00:08:13.061 real 0m0.033s 00:08:13.061 user 0m0.012s 00:08:13.061 sys 0m0.021s 00:08:13.061 10:33:39 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.061 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 ************************************ 00:08:13.061 END TEST unittest_dma 00:08:13.061 ************************************ 00:08:13.061 10:33:39 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:08:13.061 10:33:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.061 10:33:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.061 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 ************************************ 00:08:13.061 START TEST unittest_init 00:08:13.061 ************************************ 00:08:13.061 10:33:39 -- common/autotest_common.sh@1104 -- # unittest_init 00:08:13.061 10:33:39 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:13.061 00:08:13.061 00:08:13.061 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.061 http://cunit.sourceforge.net/ 00:08:13.061 00:08:13.061 00:08:13.061 Suite: subsystem_suite 00:08:13.061 Test: subsystem_sort_test_depends_on_single ...passed 00:08:13.061 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:13.061 Test: subsystem_sort_test_missing_dependency ...[2024-07-24 10:33:39.697757] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:13.061 [2024-07-24 10:33:39.698197] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:13.061 passed 00:08:13.061 00:08:13.061 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.061 suites 1 1 n/a 0 0 00:08:13.061 tests 3 3 3 0 0 00:08:13.061 asserts 20 20 20 0 n/a 00:08:13.061 00:08:13.061 Elapsed time = 0.001 seconds 00:08:13.061 00:08:13.061 real 0m0.040s 00:08:13.061 user 0m0.021s 00:08:13.061 sys 0m0.020s 00:08:13.061 10:33:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.061 10:33:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.061 ************************************ 00:08:13.061 END TEST unittest_init 00:08:13.061 ************************************ 00:08:13.406 10:33:39 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:08:13.406 10:33:39 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:13.406 10:33:39 -- unit/unittest.sh@290 -- # hostname 00:08:13.406 10:33:39 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:13.406 geninfo: WARNING: invalid characters removed from testname! 
00:08:45.540 10:34:08 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:46.916 10:34:13 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:50.211 10:34:16 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:52.739 10:34:19 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:56.025 10:34:22 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:58.555 10:34:24 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:01.084 10:34:27 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:03.615 10:34:30 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:03.615 10:34:30 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:04.553 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:04.553 Found 309 entries. 
00:09:04.553 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:04.553 Writing .css and .png files. 00:09:04.553 Generating output. 00:09:04.553 Processing file include/linux/virtio_ring.h 00:09:04.812 Processing file include/spdk/base64.h 00:09:04.812 Processing file include/spdk/trace.h 00:09:04.812 Processing file include/spdk/util.h 00:09:04.812 Processing file include/spdk/histogram_data.h 00:09:04.812 Processing file include/spdk/endian.h 00:09:04.812 Processing file include/spdk/mmio.h 00:09:04.812 Processing file include/spdk/bdev_module.h 00:09:04.812 Processing file include/spdk/thread.h 00:09:04.812 Processing file include/spdk/nvme.h 00:09:04.812 Processing file include/spdk/nvmf_transport.h 00:09:04.812 Processing file include/spdk/nvme_spec.h 00:09:04.812 Processing file include/spdk_internal/sgl.h 00:09:04.812 Processing file include/spdk_internal/utf.h 00:09:04.812 Processing file include/spdk_internal/virtio.h 00:09:04.812 Processing file include/spdk_internal/rdma.h 00:09:04.812 Processing file include/spdk_internal/sock.h 00:09:04.812 Processing file include/spdk_internal/nvme_tcp.h 00:09:05.070 Processing file lib/accel/accel.c 00:09:05.070 Processing file lib/accel/accel_sw.c 00:09:05.070 Processing file lib/accel/accel_rpc.c 00:09:05.328 Processing file lib/bdev/scsi_nvme.c 00:09:05.328 Processing file lib/bdev/bdev_rpc.c 00:09:05.328 Processing file lib/bdev/part.c 00:09:05.328 Processing file lib/bdev/bdev.c 00:09:05.328 Processing file lib/bdev/bdev_zone.c 00:09:05.586 Processing file lib/blob/blob_bs_dev.c 00:09:05.586 Processing file lib/blob/blobstore.h 00:09:05.586 Processing file lib/blob/request.c 00:09:05.586 Processing file lib/blob/zeroes.c 00:09:05.586 Processing file lib/blob/blobstore.c 00:09:05.586 Processing file lib/blobfs/tree.c 00:09:05.586 Processing file lib/blobfs/blobfs.c 00:09:05.843 Processing file lib/conf/conf.c 00:09:05.843 Processing file lib/dma/dma.c 00:09:06.101 Processing file lib/env_dpdk/pci_event.c 00:09:06.101 Processing file lib/env_dpdk/pci_idxd.c 00:09:06.101 Processing file lib/env_dpdk/pci.c 00:09:06.101 Processing file lib/env_dpdk/pci_dpdk.c 00:09:06.101 Processing file lib/env_dpdk/pci_ioat.c 00:09:06.101 Processing file lib/env_dpdk/sigbus_handler.c 00:09:06.101 Processing file lib/env_dpdk/init.c 00:09:06.101 Processing file lib/env_dpdk/threads.c 00:09:06.101 Processing file lib/env_dpdk/pci_vmd.c 00:09:06.101 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:06.101 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:06.101 Processing file lib/env_dpdk/pci_virtio.c 00:09:06.101 Processing file lib/env_dpdk/memory.c 00:09:06.101 Processing file lib/env_dpdk/env.c 00:09:06.101 Processing file lib/event/scheduler_static.c 00:09:06.101 Processing file lib/event/reactor.c 00:09:06.101 Processing file lib/event/app.c 00:09:06.101 Processing file lib/event/app_rpc.c 00:09:06.101 Processing file lib/event/log_rpc.c 00:09:06.668 Processing file lib/ftl/ftl_reloc.c 00:09:06.668 Processing file lib/ftl/ftl_l2p.c 00:09:06.668 Processing file lib/ftl/ftl_layout.c 00:09:06.668 Processing file lib/ftl/ftl_l2p_cache.c 00:09:06.668 Processing file lib/ftl/ftl_io.h 00:09:06.668 Processing file lib/ftl/ftl_init.c 00:09:06.668 Processing file lib/ftl/ftl_debug.c 00:09:06.668 Processing file lib/ftl/ftl_rq.c 00:09:06.668 Processing file lib/ftl/ftl_core.c 00:09:06.668 Processing file lib/ftl/ftl_core.h 00:09:06.668 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:06.668 Processing file lib/ftl/ftl_l2p_flat.c 00:09:06.668 
Processing file lib/ftl/ftl_p2l.c 00:09:06.668 Processing file lib/ftl/ftl_band.h 00:09:06.668 Processing file lib/ftl/ftl_nv_cache.c 00:09:06.668 Processing file lib/ftl/ftl_writer.h 00:09:06.668 Processing file lib/ftl/ftl_band.c 00:09:06.668 Processing file lib/ftl/ftl_debug.h 00:09:06.668 Processing file lib/ftl/ftl_band_ops.c 00:09:06.668 Processing file lib/ftl/ftl_io.c 00:09:06.668 Processing file lib/ftl/ftl_writer.c 00:09:06.668 Processing file lib/ftl/ftl_nv_cache.h 00:09:06.668 Processing file lib/ftl/ftl_trace.c 00:09:06.668 Processing file lib/ftl/ftl_sb.c 00:09:06.668 Processing file lib/ftl/base/ftl_base_dev.c 00:09:06.668 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:06.925 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:06.925 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:06.925 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:07.182 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:07.182 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:07.182 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:07.182 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:07.182 Processing file lib/ftl/utils/ftl_mempool.c 00:09:07.182 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:07.182 Processing file lib/ftl/utils/ftl_df.h 00:09:07.182 Processing file lib/ftl/utils/ftl_md.c 00:09:07.182 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:07.182 Processing file lib/ftl/utils/ftl_property.c 00:09:07.182 Processing file lib/ftl/utils/ftl_conf.c 00:09:07.182 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:07.182 Processing file lib/ftl/utils/ftl_property.h 00:09:07.457 Processing file lib/idxd/idxd.c 00:09:07.457 Processing file lib/idxd/idxd_user.c 00:09:07.457 Processing file lib/idxd/idxd_internal.h 00:09:07.457 Processing file lib/init/subsystem.c 00:09:07.457 Processing file lib/init/json_config.c 00:09:07.457 Processing file lib/init/subsystem_rpc.c 00:09:07.457 Processing file lib/init/rpc.c 00:09:07.457 Processing file lib/ioat/ioat.c 00:09:07.457 Processing file lib/ioat/ioat_internal.h 00:09:08.023 Processing file lib/iscsi/iscsi.c 00:09:08.023 Processing file lib/iscsi/conn.c 00:09:08.023 Processing file lib/iscsi/tgt_node.c 00:09:08.023 Processing file lib/iscsi/param.c 00:09:08.023 Processing file lib/iscsi/md5.c 00:09:08.023 Processing file lib/iscsi/iscsi_rpc.c 00:09:08.023 Processing file lib/iscsi/init_grp.c 00:09:08.023 Processing file lib/iscsi/iscsi.h 00:09:08.023 Processing file lib/iscsi/task.h 00:09:08.023 Processing file lib/iscsi/task.c 00:09:08.023 Processing file lib/iscsi/iscsi_subsystem.c 00:09:08.023 Processing file lib/iscsi/portal_grp.c 00:09:08.023 Processing file lib/json/json_parse.c 00:09:08.023 Processing file lib/json/json_write.c 00:09:08.023 Processing file lib/json/json_util.c 00:09:08.023 Processing file 
lib/jsonrpc/jsonrpc_server_tcp.c 00:09:08.023 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:08.023 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:08.023 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:08.023 Processing file lib/log/log_deprecated.c 00:09:08.023 Processing file lib/log/log.c 00:09:08.023 Processing file lib/log/log_flags.c 00:09:08.282 Processing file lib/lvol/lvol.c 00:09:08.282 Processing file lib/nbd/nbd_rpc.c 00:09:08.282 Processing file lib/nbd/nbd.c 00:09:08.282 Processing file lib/notify/notify_rpc.c 00:09:08.282 Processing file lib/notify/notify.c 00:09:09.215 Processing file lib/nvme/nvme_fabric.c 00:09:09.215 Processing file lib/nvme/nvme_transport.c 00:09:09.215 Processing file lib/nvme/nvme_ns.c 00:09:09.215 Processing file lib/nvme/nvme_pcie_internal.h 00:09:09.215 Processing file lib/nvme/nvme_rdma.c 00:09:09.215 Processing file lib/nvme/nvme_discovery.c 00:09:09.215 Processing file lib/nvme/nvme_qpair.c 00:09:09.215 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:09.215 Processing file lib/nvme/nvme_quirks.c 00:09:09.215 Processing file lib/nvme/nvme_ctrlr.c 00:09:09.215 Processing file lib/nvme/nvme_io_msg.c 00:09:09.215 Processing file lib/nvme/nvme_internal.h 00:09:09.215 Processing file lib/nvme/nvme_zns.c 00:09:09.215 Processing file lib/nvme/nvme_tcp.c 00:09:09.215 Processing file lib/nvme/nvme_pcie_common.c 00:09:09.215 Processing file lib/nvme/nvme_opal.c 00:09:09.215 Processing file lib/nvme/nvme_poll_group.c 00:09:09.215 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:09.215 Processing file lib/nvme/nvme.c 00:09:09.215 Processing file lib/nvme/nvme_cuse.c 00:09:09.215 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:09.215 Processing file lib/nvme/nvme_ns_cmd.c 00:09:09.215 Processing file lib/nvme/nvme_vfio_user.c 00:09:09.215 Processing file lib/nvme/nvme_pcie.c 00:09:09.474 Processing file lib/nvmf/ctrlr_bdev.c 00:09:09.474 Processing file lib/nvmf/ctrlr_discovery.c 00:09:09.474 Processing file lib/nvmf/ctrlr.c 00:09:09.474 Processing file lib/nvmf/rdma.c 00:09:09.474 Processing file lib/nvmf/transport.c 00:09:09.474 Processing file lib/nvmf/tcp.c 00:09:09.474 Processing file lib/nvmf/subsystem.c 00:09:09.474 Processing file lib/nvmf/nvmf.c 00:09:09.474 Processing file lib/nvmf/nvmf_internal.h 00:09:09.474 Processing file lib/nvmf/nvmf_rpc.c 00:09:09.731 Processing file lib/rdma/rdma_verbs.c 00:09:09.731 Processing file lib/rdma/common.c 00:09:09.731 Processing file lib/rpc/rpc.c 00:09:09.990 Processing file lib/scsi/scsi_bdev.c 00:09:09.990 Processing file lib/scsi/dev.c 00:09:09.990 Processing file lib/scsi/scsi.c 00:09:09.990 Processing file lib/scsi/scsi_pr.c 00:09:09.990 Processing file lib/scsi/lun.c 00:09:09.990 Processing file lib/scsi/task.c 00:09:09.990 Processing file lib/scsi/scsi_rpc.c 00:09:09.990 Processing file lib/scsi/port.c 00:09:09.990 Processing file lib/sock/sock.c 00:09:09.990 Processing file lib/sock/sock_rpc.c 00:09:09.990 Processing file lib/thread/thread.c 00:09:09.990 Processing file lib/thread/iobuf.c 00:09:10.248 Processing file lib/trace/trace_rpc.c 00:09:10.248 Processing file lib/trace/trace.c 00:09:10.248 Processing file lib/trace/trace_flags.c 00:09:10.248 Processing file lib/trace_parser/trace.cpp 00:09:10.248 Processing file lib/ut/ut.c 00:09:10.248 Processing file lib/ut_mock/mock.c 00:09:10.823 Processing file lib/util/file.c 00:09:10.823 Processing file lib/util/crc32.c 00:09:10.823 Processing file lib/util/fd_group.c 00:09:10.823 Processing file lib/util/crc64.c 00:09:10.823 
Processing file lib/util/strerror_tls.c 00:09:10.823 Processing file lib/util/xor.c 00:09:10.823 Processing file lib/util/pipe.c 00:09:10.823 Processing file lib/util/hexlify.c 00:09:10.823 Processing file lib/util/crc16.c 00:09:10.823 Processing file lib/util/string.c 00:09:10.823 Processing file lib/util/cpuset.c 00:09:10.823 Processing file lib/util/fd.c 00:09:10.823 Processing file lib/util/crc32c.c 00:09:10.823 Processing file lib/util/uuid.c 00:09:10.823 Processing file lib/util/dif.c 00:09:10.823 Processing file lib/util/iov.c 00:09:10.823 Processing file lib/util/zipf.c 00:09:10.823 Processing file lib/util/bit_array.c 00:09:10.823 Processing file lib/util/math.c 00:09:10.823 Processing file lib/util/crc32_ieee.c 00:09:10.823 Processing file lib/util/base64.c 00:09:10.823 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:10.823 Processing file lib/vfio_user/host/vfio_user.c 00:09:11.087 Processing file lib/vhost/vhost_rpc.c 00:09:11.087 Processing file lib/vhost/vhost_internal.h 00:09:11.087 Processing file lib/vhost/rte_vhost_user.c 00:09:11.087 Processing file lib/vhost/vhost_blk.c 00:09:11.087 Processing file lib/vhost/vhost_scsi.c 00:09:11.087 Processing file lib/vhost/vhost.c 00:09:11.087 Processing file lib/virtio/virtio.c 00:09:11.087 Processing file lib/virtio/virtio_pci.c 00:09:11.087 Processing file lib/virtio/virtio_vfio_user.c 00:09:11.087 Processing file lib/virtio/virtio_vhost_user.c 00:09:11.345 Processing file lib/vmd/led.c 00:09:11.345 Processing file lib/vmd/vmd.c 00:09:11.345 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:11.345 Processing file module/accel/dsa/accel_dsa.c 00:09:11.345 Processing file module/accel/error/accel_error.c 00:09:11.345 Processing file module/accel/error/accel_error_rpc.c 00:09:11.345 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:11.346 Processing file module/accel/iaa/accel_iaa.c 00:09:11.346 Processing file module/accel/ioat/accel_ioat.c 00:09:11.346 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:11.604 Processing file module/bdev/aio/bdev_aio.c 00:09:11.604 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:11.604 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:11.604 Processing file module/bdev/delay/vbdev_delay.c 00:09:11.604 Processing file module/bdev/error/vbdev_error.c 00:09:11.604 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:11.862 Processing file module/bdev/ftl/bdev_ftl.c 00:09:11.862 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:11.862 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:11.862 Processing file module/bdev/gpt/gpt.h 00:09:11.862 Processing file module/bdev/gpt/gpt.c 00:09:11.862 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:11.862 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:12.121 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:12.121 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:12.121 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:12.121 Processing file module/bdev/malloc/bdev_malloc.c 00:09:12.121 Processing file module/bdev/null/bdev_null_rpc.c 00:09:12.121 Processing file module/bdev/null/bdev_null.c 00:09:12.380 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:12.380 Processing file module/bdev/nvme/nvme_rpc.c 00:09:12.380 Processing file module/bdev/nvme/bdev_nvme.c 00:09:12.380 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:12.380 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:12.380 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:12.380 Processing file 
module/bdev/nvme/vbdev_opal.c 00:09:12.639 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:12.639 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:12.639 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:12.639 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:12.639 Processing file module/bdev/raid/raid1.c 00:09:12.639 Processing file module/bdev/raid/bdev_raid.h 00:09:12.639 Processing file module/bdev/raid/raid0.c 00:09:12.639 Processing file module/bdev/raid/concat.c 00:09:12.639 Processing file module/bdev/raid/bdev_raid.c 00:09:12.639 Processing file module/bdev/raid/raid5f.c 00:09:12.897 Processing file module/bdev/split/vbdev_split.c 00:09:12.897 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:12.897 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:12.897 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:12.897 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:12.897 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:12.897 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:12.897 Processing file module/blob/bdev/blob_bdev.c 00:09:13.156 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:13.156 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:13.156 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:13.156 Processing file module/event/subsystems/accel/accel.c 00:09:13.156 Processing file module/event/subsystems/bdev/bdev.c 00:09:13.414 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:13.414 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:13.414 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:13.414 Processing file module/event/subsystems/nbd/nbd.c 00:09:13.414 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:13.414 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:13.414 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:13.672 Processing file module/event/subsystems/scsi/scsi.c 00:09:13.672 Processing file module/event/subsystems/sock/sock.c 00:09:13.672 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:13.672 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:13.672 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:13.672 Processing file module/event/subsystems/vmd/vmd.c 00:09:13.930 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:13.930 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:13.930 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:13.930 Processing file module/sock/sock_kernel.h 00:09:14.188 Processing file module/sock/posix/posix.c 00:09:14.188 Writing directory view page. 
00:09:14.188 Overall coverage rate: 00:09:14.188 lines......: 39.1% (39266 of 100422 lines) 00:09:14.188 functions..: 42.8% (3587 of 8384 functions) 00:09:14.188 00:09:14.188 00:09:14.188 ===================== 00:09:14.188 All unit tests passed 00:09:14.188 ===================== 00:09:14.188 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:14.188 10:34:40 -- unit/unittest.sh@302 -- # set +x 00:09:14.188 00:09:14.188 00:09:14.188 00:09:14.188 real 3m24.360s 00:09:14.188 user 2m58.714s 00:09:14.188 sys 0m16.929s 00:09:14.188 10:34:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.188 10:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.188 ************************************ 00:09:14.188 END TEST unittest 00:09:14.188 ************************************ 00:09:14.188 10:34:40 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:09:14.188 10:34:40 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:09:14.188 10:34:40 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:09:14.188 10:34:40 -- spdk/autotest.sh@173 -- # timing_enter lib 00:09:14.188 10:34:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:14.188 10:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.188 10:34:40 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:14.188 10:34:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.188 10:34:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.188 10:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.188 ************************************ 00:09:14.188 START TEST env 00:09:14.188 ************************************ 00:09:14.188 10:34:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:14.188 * Looking for test storage... 
00:09:14.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:14.188 10:34:40 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:14.188 10:34:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.188 10:34:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.188 10:34:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.188 ************************************ 00:09:14.188 START TEST env_memory 00:09:14.188 ************************************ 00:09:14.188 10:34:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:14.188 00:09:14.188 00:09:14.188 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.188 http://cunit.sourceforge.net/ 00:09:14.188 00:09:14.188 00:09:14.188 Suite: memory 00:09:14.446 Test: alloc and free memory map ...[2024-07-24 10:34:40.883678] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:14.446 passed 00:09:14.446 Test: mem map translation ...[2024-07-24 10:34:40.935631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:14.446 [2024-07-24 10:34:40.935957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:14.446 [2024-07-24 10:34:40.936168] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:14.446 [2024-07-24 10:34:40.936365] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:14.446 passed 00:09:14.447 Test: mem map registration ...[2024-07-24 10:34:41.004743] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:14.447 [2024-07-24 10:34:41.005015] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:14.447 passed 00:09:14.447 Test: mem map adjacent registrations ...passed 00:09:14.447 00:09:14.447 Run Summary: Type Total Ran Passed Failed Inactive 00:09:14.447 suites 1 1 n/a 0 0 00:09:14.447 tests 4 4 4 0 0 00:09:14.447 asserts 152 152 152 0 n/a 00:09:14.447 00:09:14.447 Elapsed time = 0.261 seconds 00:09:14.447 00:09:14.447 real 0m0.295s 00:09:14.447 user 0m0.277s 00:09:14.447 sys 0m0.016s 00:09:14.447 10:34:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.447 10:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:14.447 ************************************ 00:09:14.447 END TEST env_memory 00:09:14.447 ************************************ 00:09:14.705 10:34:41 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:14.705 10:34:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.705 10:34:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.705 10:34:41 -- common/autotest_common.sh@10 -- # set +x 00:09:14.705 ************************************ 00:09:14.705 START TEST env_vtophys 00:09:14.705 ************************************ 00:09:14.705 10:34:41 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:14.705 EAL: lib.eal log level changed from notice to debug 00:09:14.705 EAL: Detected lcore 0 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 1 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 2 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 3 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 4 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 5 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 6 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 7 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 8 as core 0 on socket 0 00:09:14.705 EAL: Detected lcore 9 as core 0 on socket 0 00:09:14.705 EAL: Maximum logical cores by configuration: 128 00:09:14.705 EAL: Detected CPU lcores: 10 00:09:14.705 EAL: Detected NUMA nodes: 1 00:09:14.705 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:09:14.705 EAL: Checking presence of .so 'librte_eal.so.23' 00:09:14.705 EAL: Checking presence of .so 'librte_eal.so' 00:09:14.705 EAL: Detected static linkage of DPDK 00:09:14.705 EAL: No shared files mode enabled, IPC will be disabled 00:09:14.705 EAL: Selected IOVA mode 'PA' 00:09:14.705 EAL: Probing VFIO support... 00:09:14.705 EAL: IOMMU type 1 (Type 1) is supported 00:09:14.705 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:14.705 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:14.705 EAL: VFIO support initialized 00:09:14.705 EAL: Ask a virtual area of 0x2e000 bytes 00:09:14.705 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:14.705 EAL: Setting up physically contiguous memory... 00:09:14.705 EAL: Setting maximum number of open files to 1048576 00:09:14.705 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:14.705 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:14.705 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.705 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:14.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.705 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.705 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:14.705 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:14.705 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.705 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:14.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.705 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.705 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:14.705 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:14.705 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.705 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:14.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.705 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.705 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:14.705 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:14.705 EAL: Ask a virtual area of 0x61000 bytes 00:09:14.705 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:14.705 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:14.705 EAL: Ask a virtual area of 0x400000000 bytes 00:09:14.705 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:14.705 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:14.705 EAL: Hugepages will be freed exactly as allocated. 
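The vtophys test running here exercises virtual-to-physical address translation against the hugepage memory laid out in the EAL messages above. As a rough, generic illustration of what such a translation involves on Linux, a process can consult /proc/self/pagemap; the standalone sketch below shows that mechanism only and is not SPDK's spdk_vtophys() implementation, which instead tracks physical addresses for the registered memory shown in the memseg lists above.

/*
 * Generic virtual->physical translation via /proc/self/pagemap.
 * Each 8-byte entry describes one virtual page: bit 63 = page present,
 * bits 0-54 = page frame number. Reading PFNs usually requires root on
 * modern kernels; a result of 0 here just means "could not translate".
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t vtophys(const void *vaddr)
{
    long page_size = sysconf(_SC_PAGESIZE);
    uint64_t entry = 0;
    off_t offset = (off_t)((uintptr_t)vaddr / page_size) * (off_t)sizeof(entry);

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) {
        return 0;
    }
    if (pread(fd, &entry, sizeof(entry), offset) != (ssize_t)sizeof(entry)) {
        close(fd);
        return 0;
    }
    close(fd);

    if (!(entry & (1ULL << 63))) {               /* bit 63: page present */
        return 0;
    }
    uint64_t pfn = entry & ((1ULL << 55) - 1);   /* bits 0-54: page frame number */
    return pfn * (uint64_t)page_size + ((uintptr_t)vaddr % page_size);
}

int main(void)
{
    void *buf = malloc(4096);
    if (buf == NULL) {
        return 1;
    }
    *(volatile char *)buf = 1;   /* touch the page so it is mapped before translating */
    printf("vaddr=%p paddr=0x%llx\n", buf, (unsigned long long)vtophys(buf));
    free(buf);
    return 0;
}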
00:09:14.705 EAL: No shared files mode enabled, IPC is disabled 00:09:14.705 EAL: No shared files mode enabled, IPC is disabled 00:09:14.705 EAL: TSC frequency is ~2200000 KHz 00:09:14.705 EAL: Main lcore 0 is ready (tid=7fef3eb9ca80;cpuset=[0]) 00:09:14.705 EAL: Trying to obtain current memory policy. 00:09:14.705 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:14.705 EAL: Restoring previous memory policy: 0 00:09:14.705 EAL: request: mp_malloc_sync 00:09:14.705 EAL: No shared files mode enabled, IPC is disabled 00:09:14.705 EAL: Heap on socket 0 was expanded by 2MB 00:09:14.705 EAL: No shared files mode enabled, IPC is disabled 00:09:14.705 EAL: Mem event callback 'spdk:(nil)' registered 00:09:14.705 00:09:14.705 00:09:14.705 CUnit - A unit testing framework for C - Version 2.1-3 00:09:14.705 http://cunit.sourceforge.net/ 00:09:14.705 00:09:14.705 00:09:14.705 Suite: components_suite 00:09:15.271 Test: vtophys_malloc_test ...passed 00:09:15.271 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:15.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.271 EAL: Restoring previous memory policy: 0 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was expanded by 4MB 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was shrunk by 4MB 00:09:15.271 EAL: Trying to obtain current memory policy. 00:09:15.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.271 EAL: Restoring previous memory policy: 0 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was expanded by 6MB 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was shrunk by 6MB 00:09:15.271 EAL: Trying to obtain current memory policy. 00:09:15.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.271 EAL: Restoring previous memory policy: 0 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was expanded by 10MB 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was shrunk by 10MB 00:09:15.271 EAL: Trying to obtain current memory policy. 00:09:15.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.271 EAL: Restoring previous memory policy: 0 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was expanded by 18MB 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was shrunk by 18MB 00:09:15.271 EAL: Trying to obtain current memory policy. 
00:09:15.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.271 EAL: Restoring previous memory policy: 0 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was expanded by 34MB 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was shrunk by 34MB 00:09:15.271 EAL: Trying to obtain current memory policy. 00:09:15.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.271 EAL: Restoring previous memory policy: 0 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was expanded by 66MB 00:09:15.271 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.271 EAL: request: mp_malloc_sync 00:09:15.271 EAL: No shared files mode enabled, IPC is disabled 00:09:15.271 EAL: Heap on socket 0 was shrunk by 66MB 00:09:15.271 EAL: Trying to obtain current memory policy. 00:09:15.271 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.529 EAL: Restoring previous memory policy: 0 00:09:15.529 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.529 EAL: request: mp_malloc_sync 00:09:15.529 EAL: No shared files mode enabled, IPC is disabled 00:09:15.529 EAL: Heap on socket 0 was expanded by 130MB 00:09:15.529 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.529 EAL: request: mp_malloc_sync 00:09:15.529 EAL: No shared files mode enabled, IPC is disabled 00:09:15.529 EAL: Heap on socket 0 was shrunk by 130MB 00:09:15.529 EAL: Trying to obtain current memory policy. 00:09:15.529 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.529 EAL: Restoring previous memory policy: 0 00:09:15.529 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.529 EAL: request: mp_malloc_sync 00:09:15.529 EAL: No shared files mode enabled, IPC is disabled 00:09:15.529 EAL: Heap on socket 0 was expanded by 258MB 00:09:15.529 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.787 EAL: request: mp_malloc_sync 00:09:15.787 EAL: No shared files mode enabled, IPC is disabled 00:09:15.787 EAL: Heap on socket 0 was shrunk by 258MB 00:09:15.787 EAL: Trying to obtain current memory policy. 00:09:15.787 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:15.787 EAL: Restoring previous memory policy: 0 00:09:15.787 EAL: Calling mem event callback 'spdk:(nil)' 00:09:15.787 EAL: request: mp_malloc_sync 00:09:15.787 EAL: No shared files mode enabled, IPC is disabled 00:09:15.787 EAL: Heap on socket 0 was expanded by 514MB 00:09:16.045 EAL: Calling mem event callback 'spdk:(nil)' 00:09:16.302 EAL: request: mp_malloc_sync 00:09:16.302 EAL: No shared files mode enabled, IPC is disabled 00:09:16.302 EAL: Heap on socket 0 was shrunk by 514MB 00:09:16.302 EAL: Trying to obtain current memory policy. 
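In the vtophys_malloc_test output above, each allocation request is roughly double the previous one, so the EAL heap repeatedly grows ("Heap on socket 0 was expanded by ... MB") and is handed back on free ("... was shrunk by ... MB") through the registered 'spdk' mem event callback. The loop below reproduces that doubling shape with plain malloc/free standing in for the DPDK allocator; it is an approximation inferred from the log, not the actual test.

/*
 * Doubling allocation pattern seen in the log: allocate, touch, and free
 * progressively larger buffers so the underlying heap must grow and shrink.
 * Plain malloc/free is used here instead of the DPDK/EAL allocator.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    for (size_t mb = 2; mb <= 1024; mb *= 2) {
        size_t len = mb * 1024 * 1024;
        void *buf = malloc(len);
        if (buf == NULL) {
            fprintf(stderr, "allocation of %zu MB failed\n", mb);
            return 1;
        }
        memset(buf, 0, len);   /* force the pages to actually be backed */
        printf("allocated and touched %zu MB\n", mb);
        free(buf);             /* allows the heap to shrink again */
    }
    return 0;
}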
00:09:16.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:16.562 EAL: Restoring previous memory policy: 0 00:09:16.562 EAL: Calling mem event callback 'spdk:(nil)' 00:09:16.562 EAL: request: mp_malloc_sync 00:09:16.562 EAL: No shared files mode enabled, IPC is disabled 00:09:16.562 EAL: Heap on socket 0 was expanded by 1026MB 00:09:16.820 EAL: Calling mem event callback 'spdk:(nil)' 00:09:17.078 EAL: request: mp_malloc_sync 00:09:17.078 EAL: No shared files mode enabled, IPC is disabled 00:09:17.078 passed 00:09:17.078 00:09:17.078 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:17.078 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.078 suites 1 1 n/a 0 0 00:09:17.078 tests 2 2 2 0 0 00:09:17.078 asserts 6359 6359 6359 0 n/a 00:09:17.078 00:09:17.078 Elapsed time = 2.342 seconds 00:09:17.078 EAL: Calling mem event callback 'spdk:(nil)' 00:09:17.078 EAL: request: mp_malloc_sync 00:09:17.078 EAL: No shared files mode enabled, IPC is disabled 00:09:17.078 EAL: Heap on socket 0 was shrunk by 2MB 00:09:17.078 EAL: No shared files mode enabled, IPC is disabled 00:09:17.078 EAL: No shared files mode enabled, IPC is disabled 00:09:17.078 EAL: No shared files mode enabled, IPC is disabled 00:09:17.078 00:09:17.078 real 0m2.586s 00:09:17.078 user 0m1.381s 00:09:17.078 sys 0m1.077s 00:09:17.078 10:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.078 10:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:17.078 ************************************ 00:09:17.078 END TEST env_vtophys 00:09:17.078 ************************************ 00:09:17.336 10:34:43 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:17.336 10:34:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:17.336 10:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.336 10:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:17.336 ************************************ 00:09:17.336 START TEST env_pci 00:09:17.336 ************************************ 00:09:17.336 10:34:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:17.336 00:09:17.336 00:09:17.336 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.336 http://cunit.sourceforge.net/ 00:09:17.336 00:09:17.336 00:09:17.336 Suite: pci 00:09:17.336 Test: pci_hook ...[2024-07-24 10:34:43.820867] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 115062 has claimed it 00:09:17.336 passed 00:09:17.336 00:09:17.336 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.336 suites 1 1 n/a 0 0 00:09:17.336 tests 1 1 1 0 0 00:09:17.336 asserts 25 25 25 0 n/a 00:09:17.336 00:09:17.336 Elapsed time = 0.004 seconds 00:09:17.336 EAL: Cannot find device (10000:00:01.0) 00:09:17.336 EAL: Failed to attach device on primary process 00:09:17.336 00:09:17.336 real 0m0.061s 00:09:17.336 user 0m0.016s 00:09:17.336 sys 0m0.045s 00:09:17.336 10:34:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.336 10:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:17.336 ************************************ 00:09:17.336 END TEST env_pci 00:09:17.336 ************************************ 00:09:17.336 10:34:43 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:17.336 10:34:43 -- env/env.sh@15 -- # uname 00:09:17.336 10:34:43 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:17.336 10:34:43 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:09:17.336 10:34:43 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:17.336 10:34:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:17.336 10:34:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.336 10:34:43 -- common/autotest_common.sh@10 -- # set +x 00:09:17.336 ************************************ 00:09:17.336 START TEST env_dpdk_post_init 00:09:17.336 ************************************ 00:09:17.336 10:34:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:17.336 EAL: Detected CPU lcores: 10 00:09:17.336 EAL: Detected NUMA nodes: 1 00:09:17.336 EAL: Detected static linkage of DPDK 00:09:17.336 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:17.336 EAL: Selected IOVA mode 'PA' 00:09:17.336 EAL: VFIO support initialized 00:09:17.594 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:17.594 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:09:17.594 Starting DPDK initialization... 00:09:17.594 Starting SPDK post initialization... 00:09:17.594 SPDK NVMe probe 00:09:17.594 Attaching to 0000:00:06.0 00:09:17.594 Attached to 0000:00:06.0 00:09:17.594 Cleaning up... 00:09:17.594 00:09:17.594 real 0m0.238s 00:09:17.594 user 0m0.069s 00:09:17.594 sys 0m0.071s 00:09:17.594 10:34:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.594 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:09:17.594 ************************************ 00:09:17.594 END TEST env_dpdk_post_init 00:09:17.595 ************************************ 00:09:17.595 10:34:44 -- env/env.sh@26 -- # uname 00:09:17.595 10:34:44 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:17.595 10:34:44 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:17.595 10:34:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:17.595 10:34:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.595 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:09:17.595 ************************************ 00:09:17.595 START TEST env_mem_callbacks 00:09:17.595 ************************************ 00:09:17.595 10:34:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:17.595 EAL: Detected CPU lcores: 10 00:09:17.595 EAL: Detected NUMA nodes: 1 00:09:17.595 EAL: Detected static linkage of DPDK 00:09:17.595 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:17.595 EAL: Selected IOVA mode 'PA' 00:09:17.595 EAL: VFIO support initialized 00:09:17.853 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:17.853 00:09:17.853 00:09:17.853 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.853 http://cunit.sourceforge.net/ 00:09:17.853 00:09:17.853 00:09:17.853 Suite: memory 00:09:17.853 Test: test ... 
00:09:17.853 register 0x200000200000 2097152 00:09:17.853 malloc 3145728 00:09:17.853 register 0x200000400000 4194304 00:09:17.853 buf 0x200000500000 len 3145728 PASSED 00:09:17.853 malloc 64 00:09:17.853 buf 0x2000004fff40 len 64 PASSED 00:09:17.853 malloc 4194304 00:09:17.853 register 0x200000800000 6291456 00:09:17.853 buf 0x200000a00000 len 4194304 PASSED 00:09:17.853 free 0x200000500000 3145728 00:09:17.853 free 0x2000004fff40 64 00:09:17.853 unregister 0x200000400000 4194304 PASSED 00:09:17.853 free 0x200000a00000 4194304 00:09:17.853 unregister 0x200000800000 6291456 PASSED 00:09:17.853 malloc 8388608 00:09:17.853 register 0x200000400000 10485760 00:09:17.853 buf 0x200000600000 len 8388608 PASSED 00:09:17.853 free 0x200000600000 8388608 00:09:17.853 unregister 0x200000400000 10485760 PASSED 00:09:17.853 passed 00:09:17.853 00:09:17.853 Run Summary: Type Total Ran Passed Failed Inactive 00:09:17.853 suites 1 1 n/a 0 0 00:09:17.853 tests 1 1 1 0 0 00:09:17.853 asserts 15 15 15 0 n/a 00:09:17.853 00:09:17.853 Elapsed time = 0.009 seconds 00:09:17.853 00:09:17.853 real 0m0.201s 00:09:17.853 user 0m0.050s 00:09:17.853 sys 0m0.050s 00:09:17.853 10:34:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.853 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:09:17.853 ************************************ 00:09:17.854 END TEST env_mem_callbacks 00:09:17.854 ************************************ 00:09:17.854 00:09:17.854 real 0m3.704s 00:09:17.854 user 0m1.978s 00:09:17.854 sys 0m1.393s 00:09:17.854 10:34:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.854 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:09:17.854 ************************************ 00:09:17.854 END TEST env 00:09:17.854 ************************************ 00:09:17.854 10:34:44 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:17.854 10:34:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:17.854 10:34:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:17.854 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:09:17.854 ************************************ 00:09:17.854 START TEST rpc 00:09:17.854 ************************************ 00:09:17.854 10:34:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:18.112 * Looking for test storage... 00:09:18.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:18.112 10:34:44 -- rpc/rpc.sh@65 -- # spdk_pid=115176 00:09:18.112 10:34:44 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:18.112 10:34:44 -- rpc/rpc.sh@67 -- # waitforlisten 115176 00:09:18.112 10:34:44 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:18.112 10:34:44 -- common/autotest_common.sh@819 -- # '[' -z 115176 ']' 00:09:18.112 10:34:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.112 10:34:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:18.112 10:34:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
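Note on the rpc suite starting above: rpc.sh launches spdk_tgt with -e bdev and then blocks until the target answers on its default UNIX socket. A rough manual equivalent is sketched below, assuming the same checkout path as this run; the polling loop is only a stand-in for the waitforlisten helper from autotest_common.sh, not that helper's actual implementation.

  # Hedged sketch: reproducing the rpc.sh startup sequence by hand.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
  spdk_pid=$!
  trap 'kill $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  # Poll the default RPC socket until the target is ready (assumption:
  # rpc_get_methods as a cheap liveness probe).
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done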
00:09:18.112 10:34:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:18.112 10:34:44 -- common/autotest_common.sh@10 -- # set +x 00:09:18.112 [2024-07-24 10:34:44.661208] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:18.112 [2024-07-24 10:34:44.661652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115176 ] 00:09:18.371 [2024-07-24 10:34:44.806460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.371 [2024-07-24 10:34:44.930653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:18.371 [2024-07-24 10:34:44.930966] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:18.371 [2024-07-24 10:34:44.931013] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 115176' to capture a snapshot of events at runtime. 00:09:18.371 [2024-07-24 10:34:44.931073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid115176 for offline analysis/debug. 00:09:18.371 [2024-07-24 10:34:44.931205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.939 10:34:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:18.939 10:34:45 -- common/autotest_common.sh@852 -- # return 0 00:09:18.939 10:34:45 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:18.939 10:34:45 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:18.939 10:34:45 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:18.939 10:34:45 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:18.939 10:34:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:18.939 10:34:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:18.939 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.939 ************************************ 00:09:18.939 START TEST rpc_integrity 00:09:18.939 ************************************ 00:09:18.939 10:34:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:09:18.939 10:34:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:18.939 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:18.939 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:18.939 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:18.939 10:34:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:18.939 10:34:45 -- rpc/rpc.sh@13 -- # jq length 00:09:19.198 10:34:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:19.198 10:34:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:19.198 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.198 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.198 10:34:45 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:19.198 10:34:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:19.198 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.198 10:34:45 -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.198 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.198 10:34:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:19.198 { 00:09:19.198 "name": "Malloc0", 00:09:19.198 "aliases": [ 00:09:19.198 "db3cddbf-d1cf-49f8-90b3-0128de455b94" 00:09:19.198 ], 00:09:19.198 "product_name": "Malloc disk", 00:09:19.198 "block_size": 512, 00:09:19.198 "num_blocks": 16384, 00:09:19.198 "uuid": "db3cddbf-d1cf-49f8-90b3-0128de455b94", 00:09:19.198 "assigned_rate_limits": { 00:09:19.198 "rw_ios_per_sec": 0, 00:09:19.198 "rw_mbytes_per_sec": 0, 00:09:19.198 "r_mbytes_per_sec": 0, 00:09:19.198 "w_mbytes_per_sec": 0 00:09:19.198 }, 00:09:19.198 "claimed": false, 00:09:19.198 "zoned": false, 00:09:19.198 "supported_io_types": { 00:09:19.198 "read": true, 00:09:19.198 "write": true, 00:09:19.198 "unmap": true, 00:09:19.198 "write_zeroes": true, 00:09:19.198 "flush": true, 00:09:19.198 "reset": true, 00:09:19.198 "compare": false, 00:09:19.198 "compare_and_write": false, 00:09:19.198 "abort": true, 00:09:19.198 "nvme_admin": false, 00:09:19.198 "nvme_io": false 00:09:19.198 }, 00:09:19.198 "memory_domains": [ 00:09:19.198 { 00:09:19.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.198 "dma_device_type": 2 00:09:19.198 } 00:09:19.198 ], 00:09:19.198 "driver_specific": {} 00:09:19.198 } 00:09:19.198 ]' 00:09:19.198 10:34:45 -- rpc/rpc.sh@17 -- # jq length 00:09:19.198 10:34:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:19.198 10:34:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:19.198 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.198 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 [2024-07-24 10:34:45.730132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:19.198 [2024-07-24 10:34:45.730276] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.198 [2024-07-24 10:34:45.730331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006080 00:09:19.198 [2024-07-24 10:34:45.730385] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.198 [2024-07-24 10:34:45.733929] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.198 [2024-07-24 10:34:45.734018] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:19.198 Passthru0 00:09:19.198 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.198 10:34:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:19.198 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.198 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.198 10:34:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:19.198 { 00:09:19.198 "name": "Malloc0", 00:09:19.198 "aliases": [ 00:09:19.198 "db3cddbf-d1cf-49f8-90b3-0128de455b94" 00:09:19.198 ], 00:09:19.198 "product_name": "Malloc disk", 00:09:19.198 "block_size": 512, 00:09:19.198 "num_blocks": 16384, 00:09:19.198 "uuid": "db3cddbf-d1cf-49f8-90b3-0128de455b94", 00:09:19.198 "assigned_rate_limits": { 00:09:19.198 "rw_ios_per_sec": 0, 00:09:19.198 "rw_mbytes_per_sec": 0, 00:09:19.198 "r_mbytes_per_sec": 0, 00:09:19.198 "w_mbytes_per_sec": 0 00:09:19.198 }, 00:09:19.198 "claimed": true, 00:09:19.198 "claim_type": "exclusive_write", 00:09:19.198 "zoned": false, 00:09:19.198 "supported_io_types": { 00:09:19.198 "read": true, 
00:09:19.198 "write": true, 00:09:19.198 "unmap": true, 00:09:19.198 "write_zeroes": true, 00:09:19.198 "flush": true, 00:09:19.198 "reset": true, 00:09:19.198 "compare": false, 00:09:19.198 "compare_and_write": false, 00:09:19.198 "abort": true, 00:09:19.198 "nvme_admin": false, 00:09:19.198 "nvme_io": false 00:09:19.198 }, 00:09:19.198 "memory_domains": [ 00:09:19.198 { 00:09:19.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.198 "dma_device_type": 2 00:09:19.198 } 00:09:19.198 ], 00:09:19.198 "driver_specific": {} 00:09:19.198 }, 00:09:19.198 { 00:09:19.198 "name": "Passthru0", 00:09:19.198 "aliases": [ 00:09:19.198 "147ae94d-49db-5ba0-af08-9be06ada7d50" 00:09:19.198 ], 00:09:19.198 "product_name": "passthru", 00:09:19.198 "block_size": 512, 00:09:19.198 "num_blocks": 16384, 00:09:19.198 "uuid": "147ae94d-49db-5ba0-af08-9be06ada7d50", 00:09:19.198 "assigned_rate_limits": { 00:09:19.198 "rw_ios_per_sec": 0, 00:09:19.198 "rw_mbytes_per_sec": 0, 00:09:19.198 "r_mbytes_per_sec": 0, 00:09:19.198 "w_mbytes_per_sec": 0 00:09:19.198 }, 00:09:19.198 "claimed": false, 00:09:19.198 "zoned": false, 00:09:19.198 "supported_io_types": { 00:09:19.198 "read": true, 00:09:19.198 "write": true, 00:09:19.198 "unmap": true, 00:09:19.198 "write_zeroes": true, 00:09:19.198 "flush": true, 00:09:19.198 "reset": true, 00:09:19.198 "compare": false, 00:09:19.198 "compare_and_write": false, 00:09:19.198 "abort": true, 00:09:19.198 "nvme_admin": false, 00:09:19.198 "nvme_io": false 00:09:19.198 }, 00:09:19.198 "memory_domains": [ 00:09:19.198 { 00:09:19.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.198 "dma_device_type": 2 00:09:19.198 } 00:09:19.198 ], 00:09:19.198 "driver_specific": { 00:09:19.198 "passthru": { 00:09:19.198 "name": "Passthru0", 00:09:19.198 "base_bdev_name": "Malloc0" 00:09:19.198 } 00:09:19.198 } 00:09:19.198 } 00:09:19.198 ]' 00:09:19.198 10:34:45 -- rpc/rpc.sh@21 -- # jq length 00:09:19.198 10:34:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:19.198 10:34:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:19.198 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.198 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.198 10:34:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:19.198 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.198 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.198 10:34:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:19.198 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.198 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.198 10:34:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:19.198 10:34:45 -- rpc/rpc.sh@26 -- # jq length 00:09:19.198 10:34:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:19.198 00:09:19.198 real 0m0.289s 00:09:19.198 user 0m0.188s 00:09:19.198 sys 0m0.030s 00:09:19.198 10:34:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.198 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.198 ************************************ 00:09:19.198 END TEST rpc_integrity 00:09:19.198 ************************************ 00:09:19.457 10:34:45 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:19.457 10:34:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:09:19.457 10:34:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.457 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.457 ************************************ 00:09:19.457 START TEST rpc_plugins 00:09:19.457 ************************************ 00:09:19.457 10:34:45 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:09:19.457 10:34:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:19.457 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.457 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.457 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.457 10:34:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:19.457 10:34:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:19.457 10:34:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.457 10:34:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.457 10:34:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.457 10:34:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:19.457 { 00:09:19.457 "name": "Malloc1", 00:09:19.457 "aliases": [ 00:09:19.457 "79a41421-28da-4068-8429-c9bf4248ec5e" 00:09:19.457 ], 00:09:19.457 "product_name": "Malloc disk", 00:09:19.457 "block_size": 4096, 00:09:19.457 "num_blocks": 256, 00:09:19.457 "uuid": "79a41421-28da-4068-8429-c9bf4248ec5e", 00:09:19.457 "assigned_rate_limits": { 00:09:19.457 "rw_ios_per_sec": 0, 00:09:19.457 "rw_mbytes_per_sec": 0, 00:09:19.457 "r_mbytes_per_sec": 0, 00:09:19.457 "w_mbytes_per_sec": 0 00:09:19.457 }, 00:09:19.457 "claimed": false, 00:09:19.457 "zoned": false, 00:09:19.457 "supported_io_types": { 00:09:19.457 "read": true, 00:09:19.457 "write": true, 00:09:19.457 "unmap": true, 00:09:19.457 "write_zeroes": true, 00:09:19.457 "flush": true, 00:09:19.457 "reset": true, 00:09:19.457 "compare": false, 00:09:19.457 "compare_and_write": false, 00:09:19.457 "abort": true, 00:09:19.457 "nvme_admin": false, 00:09:19.457 "nvme_io": false 00:09:19.457 }, 00:09:19.457 "memory_domains": [ 00:09:19.457 { 00:09:19.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.457 "dma_device_type": 2 00:09:19.457 } 00:09:19.457 ], 00:09:19.457 "driver_specific": {} 00:09:19.457 } 00:09:19.457 ]' 00:09:19.457 10:34:45 -- rpc/rpc.sh@32 -- # jq length 00:09:19.457 10:34:46 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:19.457 10:34:46 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:19.457 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.457 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.457 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.457 10:34:46 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:19.457 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.457 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.457 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.457 10:34:46 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:19.457 10:34:46 -- rpc/rpc.sh@36 -- # jq length 00:09:19.457 10:34:46 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:19.457 00:09:19.457 real 0m0.143s 00:09:19.457 user 0m0.108s 00:09:19.457 sys 0m0.001s 00:09:19.457 10:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.457 ************************************ 00:09:19.457 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.457 END TEST rpc_plugins 00:09:19.457 ************************************ 00:09:19.457 10:34:46 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:09:19.457 10:34:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:19.457 10:34:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.457 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.457 ************************************ 00:09:19.457 START TEST rpc_trace_cmd_test 00:09:19.457 ************************************ 00:09:19.457 10:34:46 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:09:19.457 10:34:46 -- rpc/rpc.sh@40 -- # local info 00:09:19.457 10:34:46 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:19.457 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.457 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.716 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.716 10:34:46 -- rpc/rpc.sh@42 -- # info='{ 00:09:19.716 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid115176", 00:09:19.716 "tpoint_group_mask": "0x8", 00:09:19.716 "iscsi_conn": { 00:09:19.716 "mask": "0x2", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "scsi": { 00:09:19.716 "mask": "0x4", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "bdev": { 00:09:19.716 "mask": "0x8", 00:09:19.716 "tpoint_mask": "0xffffffffffffffff" 00:09:19.716 }, 00:09:19.716 "nvmf_rdma": { 00:09:19.716 "mask": "0x10", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "nvmf_tcp": { 00:09:19.716 "mask": "0x20", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "ftl": { 00:09:19.716 "mask": "0x40", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "blobfs": { 00:09:19.716 "mask": "0x80", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "dsa": { 00:09:19.716 "mask": "0x200", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "thread": { 00:09:19.716 "mask": "0x400", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "nvme_pcie": { 00:09:19.716 "mask": "0x800", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "iaa": { 00:09:19.716 "mask": "0x1000", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "nvme_tcp": { 00:09:19.716 "mask": "0x2000", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 }, 00:09:19.716 "bdev_nvme": { 00:09:19.716 "mask": "0x4000", 00:09:19.716 "tpoint_mask": "0x0" 00:09:19.716 } 00:09:19.716 }' 00:09:19.716 10:34:46 -- rpc/rpc.sh@43 -- # jq length 00:09:19.716 10:34:46 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:09:19.716 10:34:46 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:19.716 10:34:46 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:19.716 10:34:46 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:19.716 10:34:46 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:19.716 10:34:46 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:19.716 10:34:46 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:19.716 10:34:46 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:19.975 10:34:46 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:19.975 00:09:19.975 real 0m0.268s 00:09:19.975 user 0m0.224s 00:09:19.975 sys 0m0.038s 00:09:19.975 10:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.975 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.975 ************************************ 00:09:19.975 END TEST rpc_trace_cmd_test 00:09:19.975 ************************************ 00:09:19.975 10:34:46 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:19.975 10:34:46 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:19.975 10:34:46 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:09:19.975 10:34:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:19.975 10:34:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:19.975 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.975 ************************************ 00:09:19.975 START TEST rpc_daemon_integrity 00:09:19.975 ************************************ 00:09:19.975 10:34:46 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:09:19.975 10:34:46 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:19.975 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.975 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.975 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.975 10:34:46 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:19.975 10:34:46 -- rpc/rpc.sh@13 -- # jq length 00:09:19.975 10:34:46 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:19.975 10:34:46 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:19.975 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.975 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.975 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.975 10:34:46 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:19.975 10:34:46 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:19.975 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.975 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.975 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.975 10:34:46 -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:19.975 { 00:09:19.975 "name": "Malloc2", 00:09:19.975 "aliases": [ 00:09:19.975 "f9f0b4c7-98fc-4289-8e8e-8a2e0388dde2" 00:09:19.975 ], 00:09:19.975 "product_name": "Malloc disk", 00:09:19.975 "block_size": 512, 00:09:19.975 "num_blocks": 16384, 00:09:19.975 "uuid": "f9f0b4c7-98fc-4289-8e8e-8a2e0388dde2", 00:09:19.975 "assigned_rate_limits": { 00:09:19.976 "rw_ios_per_sec": 0, 00:09:19.976 "rw_mbytes_per_sec": 0, 00:09:19.976 "r_mbytes_per_sec": 0, 00:09:19.976 "w_mbytes_per_sec": 0 00:09:19.976 }, 00:09:19.976 "claimed": false, 00:09:19.976 "zoned": false, 00:09:19.976 "supported_io_types": { 00:09:19.976 "read": true, 00:09:19.976 "write": true, 00:09:19.976 "unmap": true, 00:09:19.976 "write_zeroes": true, 00:09:19.976 "flush": true, 00:09:19.976 "reset": true, 00:09:19.976 "compare": false, 00:09:19.976 "compare_and_write": false, 00:09:19.976 "abort": true, 00:09:19.976 "nvme_admin": false, 00:09:19.976 "nvme_io": false 00:09:19.976 }, 00:09:19.976 "memory_domains": [ 00:09:19.976 { 00:09:19.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.976 "dma_device_type": 2 00:09:19.976 } 00:09:19.976 ], 00:09:19.976 "driver_specific": {} 00:09:19.976 } 00:09:19.976 ]' 00:09:19.976 10:34:46 -- rpc/rpc.sh@17 -- # jq length 00:09:19.976 10:34:46 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:19.976 10:34:46 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:19.976 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.976 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.976 [2024-07-24 10:34:46.609706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:19.976 [2024-07-24 10:34:46.609864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.976 [2024-07-24 10:34:46.609934] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:19.976 
[2024-07-24 10:34:46.609962] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.976 [2024-07-24 10:34:46.612932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.976 [2024-07-24 10:34:46.613023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:19.976 Passthru0 00:09:19.976 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.976 10:34:46 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:19.976 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:19.976 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:19.976 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:19.976 10:34:46 -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:19.976 { 00:09:19.976 "name": "Malloc2", 00:09:19.976 "aliases": [ 00:09:19.976 "f9f0b4c7-98fc-4289-8e8e-8a2e0388dde2" 00:09:19.976 ], 00:09:19.976 "product_name": "Malloc disk", 00:09:19.976 "block_size": 512, 00:09:19.976 "num_blocks": 16384, 00:09:19.976 "uuid": "f9f0b4c7-98fc-4289-8e8e-8a2e0388dde2", 00:09:19.976 "assigned_rate_limits": { 00:09:19.976 "rw_ios_per_sec": 0, 00:09:19.976 "rw_mbytes_per_sec": 0, 00:09:19.976 "r_mbytes_per_sec": 0, 00:09:19.976 "w_mbytes_per_sec": 0 00:09:19.976 }, 00:09:19.976 "claimed": true, 00:09:19.976 "claim_type": "exclusive_write", 00:09:19.976 "zoned": false, 00:09:19.976 "supported_io_types": { 00:09:19.976 "read": true, 00:09:19.976 "write": true, 00:09:19.976 "unmap": true, 00:09:19.976 "write_zeroes": true, 00:09:19.976 "flush": true, 00:09:19.976 "reset": true, 00:09:19.976 "compare": false, 00:09:19.976 "compare_and_write": false, 00:09:19.976 "abort": true, 00:09:19.976 "nvme_admin": false, 00:09:19.976 "nvme_io": false 00:09:19.976 }, 00:09:19.976 "memory_domains": [ 00:09:19.976 { 00:09:19.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.976 "dma_device_type": 2 00:09:19.976 } 00:09:19.976 ], 00:09:19.976 "driver_specific": {} 00:09:19.976 }, 00:09:19.976 { 00:09:19.976 "name": "Passthru0", 00:09:19.976 "aliases": [ 00:09:19.976 "74a8a6a4-37c6-51bb-a73c-c6c8b2cf5dc8" 00:09:19.976 ], 00:09:19.976 "product_name": "passthru", 00:09:19.976 "block_size": 512, 00:09:19.976 "num_blocks": 16384, 00:09:19.976 "uuid": "74a8a6a4-37c6-51bb-a73c-c6c8b2cf5dc8", 00:09:19.976 "assigned_rate_limits": { 00:09:19.976 "rw_ios_per_sec": 0, 00:09:19.976 "rw_mbytes_per_sec": 0, 00:09:19.976 "r_mbytes_per_sec": 0, 00:09:19.976 "w_mbytes_per_sec": 0 00:09:19.976 }, 00:09:19.976 "claimed": false, 00:09:19.976 "zoned": false, 00:09:19.976 "supported_io_types": { 00:09:19.976 "read": true, 00:09:19.976 "write": true, 00:09:19.976 "unmap": true, 00:09:19.976 "write_zeroes": true, 00:09:19.976 "flush": true, 00:09:19.976 "reset": true, 00:09:19.976 "compare": false, 00:09:19.976 "compare_and_write": false, 00:09:19.976 "abort": true, 00:09:19.976 "nvme_admin": false, 00:09:19.976 "nvme_io": false 00:09:19.976 }, 00:09:19.976 "memory_domains": [ 00:09:19.976 { 00:09:19.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.976 "dma_device_type": 2 00:09:19.976 } 00:09:19.976 ], 00:09:19.976 "driver_specific": { 00:09:19.976 "passthru": { 00:09:19.976 "name": "Passthru0", 00:09:19.976 "base_bdev_name": "Malloc2" 00:09:19.976 } 00:09:19.976 } 00:09:19.976 } 00:09:19.976 ]' 00:09:19.976 10:34:46 -- rpc/rpc.sh@21 -- # jq length 00:09:20.235 10:34:46 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:20.235 10:34:46 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:20.235 10:34:46 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.235 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:20.235 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.235 10:34:46 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:20.235 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.235 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:20.235 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.235 10:34:46 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:20.235 10:34:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.235 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:20.235 10:34:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.235 10:34:46 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:20.235 10:34:46 -- rpc/rpc.sh@26 -- # jq length 00:09:20.235 10:34:46 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:20.235 00:09:20.235 real 0m0.313s 00:09:20.235 user 0m0.213s 00:09:20.235 sys 0m0.030s 00:09:20.235 10:34:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.235 10:34:46 -- common/autotest_common.sh@10 -- # set +x 00:09:20.235 ************************************ 00:09:20.235 END TEST rpc_daemon_integrity 00:09:20.235 ************************************ 00:09:20.235 10:34:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:20.235 10:34:46 -- rpc/rpc.sh@84 -- # killprocess 115176 00:09:20.235 10:34:46 -- common/autotest_common.sh@926 -- # '[' -z 115176 ']' 00:09:20.235 10:34:46 -- common/autotest_common.sh@930 -- # kill -0 115176 00:09:20.235 10:34:46 -- common/autotest_common.sh@931 -- # uname 00:09:20.235 10:34:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:20.235 10:34:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115176 00:09:20.235 10:34:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:20.235 10:34:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:20.235 killing process with pid 115176 00:09:20.235 10:34:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115176' 00:09:20.235 10:34:46 -- common/autotest_common.sh@945 -- # kill 115176 00:09:20.235 10:34:46 -- common/autotest_common.sh@950 -- # wait 115176 00:09:20.802 00:09:20.802 real 0m2.928s 00:09:20.802 user 0m3.624s 00:09:20.802 sys 0m0.745s 00:09:20.802 10:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.802 ************************************ 00:09:20.802 END TEST rpc 00:09:20.802 10:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:20.802 ************************************ 00:09:20.802 10:34:47 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:20.802 10:34:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:20.802 10:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:20.802 10:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:20.802 ************************************ 00:09:20.802 START TEST rpc_client 00:09:20.802 ************************************ 00:09:20.802 10:34:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:21.060 * Looking for test storage... 
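The teardown below uses the autotest killprocess helper: verify the pid is still alive with kill -0, confirm it is the expected reactor process (not sudo), then kill it and wait for it to be reaped. A hedged approximation of that pattern (the real helper lives in autotest_common.sh and does a few more checks than shown here):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                          # still alive?
      [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                             # reap; works because the target is a child of this shell
  }
  killprocess "$spdk_pid"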
00:09:21.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:21.060 10:34:47 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:21.060 OK 00:09:21.060 10:34:47 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:21.060 00:09:21.060 real 0m0.118s 00:09:21.060 user 0m0.075s 00:09:21.060 sys 0m0.058s 00:09:21.060 10:34:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:21.060 10:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:21.060 ************************************ 00:09:21.060 END TEST rpc_client 00:09:21.060 ************************************ 00:09:21.060 10:34:47 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:21.060 10:34:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:21.060 10:34:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:21.060 10:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:21.060 ************************************ 00:09:21.060 START TEST json_config 00:09:21.060 ************************************ 00:09:21.060 10:34:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:21.060 10:34:47 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.060 10:34:47 -- nvmf/common.sh@7 -- # uname -s 00:09:21.060 10:34:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.060 10:34:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.060 10:34:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.060 10:34:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.060 10:34:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.060 10:34:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.060 10:34:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.060 10:34:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.060 10:34:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.060 10:34:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.060 10:34:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a3cb0acf-e372-48c7-90ef-85060e57ba9e 00:09:21.060 10:34:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=a3cb0acf-e372-48c7-90ef-85060e57ba9e 00:09:21.060 10:34:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.060 10:34:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.060 10:34:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:21.060 10:34:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.060 10:34:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.060 10:34:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.060 10:34:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.061 10:34:47 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.061 10:34:47 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.061 10:34:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.061 10:34:47 -- paths/export.sh@5 -- # export PATH 00:09:21.061 10:34:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:21.061 10:34:47 -- nvmf/common.sh@46 -- # : 0 00:09:21.061 10:34:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:21.061 10:34:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:21.061 10:34:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:21.061 10:34:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.061 10:34:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.061 10:34:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:21.061 10:34:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:21.061 10:34:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:21.061 10:34:47 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:09:21.061 10:34:47 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:09:21.061 10:34:47 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:09:21.061 10:34:47 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:21.061 10:34:47 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:09:21.061 10:34:47 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:09:21.061 10:34:47 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:21.061 10:34:47 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:09:21.061 10:34:47 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:21.061 10:34:47 -- json_config/json_config.sh@32 -- # declare -A app_params 00:09:21.061 10:34:47 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:21.061 10:34:47 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:09:21.061 10:34:47 -- json_config/json_config.sh@43 -- # last_event_id=0 00:09:21.061 10:34:47 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:21.061 INFO: JSON configuration test init 00:09:21.061 10:34:47 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:09:21.061 10:34:47 -- json_config/json_config.sh@420 -- # json_config_test_init 00:09:21.061 10:34:47 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:09:21.061 10:34:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:21.061 10:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:21.061 10:34:47 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:09:21.061 10:34:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:21.061 10:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:21.061 10:34:47 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:09:21.061 10:34:47 -- json_config/json_config.sh@98 -- # local app=target 00:09:21.061 10:34:47 -- json_config/json_config.sh@99 -- # shift 00:09:21.061 10:34:47 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:21.061 10:34:47 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:21.061 10:34:47 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:21.061 10:34:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:21.061 10:34:47 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:21.061 10:34:47 -- json_config/json_config.sh@111 -- # app_pid[$app]=115453 00:09:21.061 Waiting for target to run... 00:09:21.061 10:34:47 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:21.061 10:34:47 -- json_config/json_config.sh@114 -- # waitforlisten 115453 /var/tmp/spdk_tgt.sock 00:09:21.061 10:34:47 -- common/autotest_common.sh@819 -- # '[' -z 115453 ']' 00:09:21.061 10:34:47 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:21.061 10:34:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:21.061 10:34:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:21.061 10:34:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:21.061 10:34:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.061 10:34:47 -- common/autotest_common.sh@10 -- # set +x 00:09:21.319 [2024-07-24 10:34:47.780211] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
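Once the json_config target below is listening, every call goes through the tgt_rpc wrapper on the dedicated /var/tmp/spdk_tgt.sock socket rather than the default one. A minimal sketch of that pattern, re-declaring tgt_rpc the way json_config.sh@36 does and using only commands that appear later in this run:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  tgt_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }

  # Feed the generated NVMe subsystem config into the waiting target,
  # then query which notification types it records.
  "$SPDK_DIR/scripts/gen_nvme.sh" --json-with-subsystems | tgt_rpc load_config
  tgt_rpc notify_get_types
  tgt_rpc notify_get_notifications -i 0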
00:09:21.319 [2024-07-24 10:34:47.780412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115453 ] 00:09:21.577 [2024-07-24 10:34:48.202908] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.835 [2024-07-24 10:34:48.287358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:21.835 [2024-07-24 10:34:48.287710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.093 10:34:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:22.093 00:09:22.093 10:34:48 -- common/autotest_common.sh@852 -- # return 0 00:09:22.093 10:34:48 -- json_config/json_config.sh@115 -- # echo '' 00:09:22.093 10:34:48 -- json_config/json_config.sh@322 -- # create_accel_config 00:09:22.093 10:34:48 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:09:22.093 10:34:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:22.093 10:34:48 -- common/autotest_common.sh@10 -- # set +x 00:09:22.093 10:34:48 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:09:22.093 10:34:48 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:09:22.093 10:34:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:22.093 10:34:48 -- common/autotest_common.sh@10 -- # set +x 00:09:22.351 10:34:48 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:22.351 10:34:48 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:09:22.351 10:34:48 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:22.609 10:34:49 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:09:22.609 10:34:49 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:09:22.609 10:34:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:22.609 10:34:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.609 10:34:49 -- json_config/json_config.sh@48 -- # local ret=0 00:09:22.609 10:34:49 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:22.609 10:34:49 -- json_config/json_config.sh@49 -- # local enabled_types 00:09:22.609 10:34:49 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:22.609 10:34:49 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:22.609 10:34:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:22.867 10:34:49 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:22.867 10:34:49 -- json_config/json_config.sh@51 -- # local get_types 00:09:22.867 10:34:49 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:22.867 10:34:49 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:09:22.867 10:34:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:22.867 10:34:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.867 10:34:49 -- json_config/json_config.sh@58 -- # return 0 00:09:22.867 10:34:49 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:09:22.867 10:34:49 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:09:22.867 10:34:49 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:09:22.867 10:34:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:22.867 10:34:49 -- common/autotest_common.sh@10 -- # set +x 00:09:22.867 10:34:49 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:09:22.867 10:34:49 -- json_config/json_config.sh@160 -- # local expected_notifications 00:09:22.867 10:34:49 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:09:22.867 10:34:49 -- json_config/json_config.sh@164 -- # get_notifications 00:09:22.867 10:34:49 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:22.867 10:34:49 -- json_config/json_config.sh@64 -- # IFS=: 00:09:22.867 10:34:49 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:22.867 10:34:49 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:22.867 10:34:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:22.867 10:34:49 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:23.125 10:34:49 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:23.125 10:34:49 -- json_config/json_config.sh@64 -- # IFS=: 00:09:23.125 10:34:49 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:23.125 10:34:49 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:09:23.125 10:34:49 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:09:23.125 10:34:49 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:23.125 10:34:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:23.383 Nvme0n1p0 Nvme0n1p1 00:09:23.383 10:34:49 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:23.383 10:34:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:23.642 [2024-07-24 10:34:50.198336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:23.642 [2024-07-24 10:34:50.198494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:23.642 00:09:23.642 10:34:50 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:23.642 10:34:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:23.900 Malloc3 00:09:23.900 10:34:50 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:23.900 10:34:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:24.159 [2024-07-24 10:34:50.722582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:24.159 [2024-07-24 10:34:50.722771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.159 [2024-07-24 10:34:50.722837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:24.159 [2024-07-24 10:34:50.722871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:24.159 [2024-07-24 10:34:50.725858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.159 [2024-07-24 10:34:50.725945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:24.159 PTBdevFromMalloc3 00:09:24.159 10:34:50 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:24.159 10:34:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:24.417 Null0 00:09:24.417 10:34:50 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:24.417 10:34:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:24.676 Malloc0 00:09:24.676 10:34:51 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:24.676 10:34:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:24.934 Malloc1 00:09:24.934 10:34:51 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:24.934 10:34:51 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:25.193 102400+0 records in 00:09:25.193 102400+0 records out 00:09:25.193 104857600 bytes (105 MB, 100 MiB) copied, 0.296816 s, 353 MB/s 00:09:25.193 10:34:51 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:25.193 10:34:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:25.451 aio_disk 00:09:25.451 10:34:51 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:25.451 10:34:51 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:25.451 10:34:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:25.710 b7bcba5b-ea3b-4d45-bdb5-6e9e109ba810 00:09:25.710 10:34:52 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:25.710 10:34:52 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:25.710 10:34:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:25.970 10:34:52 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:25.970 10:34:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:26.241 10:34:52 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:26.241 10:34:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:26.499 10:34:52 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:26.500 10:34:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:26.758 10:34:53 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:09:26.758 10:34:53 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:09:26.758 10:34:53 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d05bb380-5797-4077-835c-19c2ccb7d988 bdev_register:5e4106c0-9402-4279-ad4f-46b2f5d6fd16 bdev_register:d753e7df-a32a-4484-b0aa-c8f45a5ea10c bdev_register:3b52ac00-15ca-40ec-b2aa-093b1b7ed358 00:09:26.758 10:34:53 -- json_config/json_config.sh@70 -- # local events_to_check 00:09:26.758 10:34:53 -- json_config/json_config.sh@71 -- # local recorded_events 00:09:26.758 10:34:53 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:26.758 10:34:53 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:d05bb380-5797-4077-835c-19c2ccb7d988 bdev_register:5e4106c0-9402-4279-ad4f-46b2f5d6fd16 bdev_register:d753e7df-a32a-4484-b0aa-c8f45a5ea10c bdev_register:3b52ac00-15ca-40ec-b2aa-093b1b7ed358 00:09:26.758 10:34:53 -- json_config/json_config.sh@74 -- # sort 00:09:26.758 10:34:53 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:09:26.758 10:34:53 -- json_config/json_config.sh@75 -- # get_notifications 00:09:26.758 10:34:53 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:09:26.758 10:34:53 -- json_config/json_config.sh@75 -- # sort 00:09:26.758 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:26.758 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:26.758 10:34:53 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:09:26.758 10:34:53 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:26.758 10:34:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:d05bb380-5797-4077-835c-19c2ccb7d988 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:5e4106c0-9402-4279-ad4f-46b2f5d6fd16 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:d753e7df-a32a-4484-b0aa-c8f45a5ea10c 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.017 10:34:53 -- json_config/json_config.sh@65 -- # echo bdev_register:3b52ac00-15ca-40ec-b2aa-093b1b7ed358 00:09:27.017 10:34:53 -- json_config/json_config.sh@64 -- # IFS=: 00:09:27.018 10:34:53 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:09:27.018 10:34:53 -- json_config/json_config.sh@77 
-- # [[ bdev_register:3b52ac00-15ca-40ec-b2aa-093b1b7ed358 bdev_register:5e4106c0-9402-4279-ad4f-46b2f5d6fd16 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d05bb380-5797-4077-835c-19c2ccb7d988 bdev_register:d753e7df-a32a-4484-b0aa-c8f45a5ea10c != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\b\5\2\a\c\0\0\-\1\5\c\a\-\4\0\e\c\-\b\2\a\a\-\0\9\3\b\1\b\7\e\d\3\5\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\e\4\1\0\6\c\0\-\9\4\0\2\-\4\2\7\9\-\a\d\4\f\-\4\6\b\2\f\5\d\6\f\d\1\6\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\0\5\b\b\3\8\0\-\5\7\9\7\-\4\0\7\7\-\8\3\5\c\-\1\9\c\2\c\c\b\7\d\9\8\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\7\5\3\e\7\d\f\-\a\3\2\a\-\4\4\8\4\-\b\0\a\a\-\c\8\f\4\5\a\5\e\a\1\0\c ]] 00:09:27.018 10:34:53 -- json_config/json_config.sh@89 -- # cat 00:09:27.018 10:34:53 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:3b52ac00-15ca-40ec-b2aa-093b1b7ed358 bdev_register:5e4106c0-9402-4279-ad4f-46b2f5d6fd16 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:d05bb380-5797-4077-835c-19c2ccb7d988 bdev_register:d753e7df-a32a-4484-b0aa-c8f45a5ea10c 00:09:27.018 Expected events matched: 00:09:27.018 bdev_register:3b52ac00-15ca-40ec-b2aa-093b1b7ed358 00:09:27.018 bdev_register:5e4106c0-9402-4279-ad4f-46b2f5d6fd16 00:09:27.018 bdev_register:Malloc0 00:09:27.018 bdev_register:Malloc0p0 00:09:27.018 bdev_register:Malloc0p1 00:09:27.018 bdev_register:Malloc0p2 00:09:27.018 bdev_register:Malloc1 00:09:27.018 bdev_register:Malloc3 00:09:27.018 bdev_register:Null0 00:09:27.018 bdev_register:Nvme0n1 00:09:27.018 bdev_register:Nvme0n1p0 00:09:27.018 bdev_register:Nvme0n1p1 00:09:27.018 bdev_register:PTBdevFromMalloc3 00:09:27.018 bdev_register:aio_disk 00:09:27.018 bdev_register:d05bb380-5797-4077-835c-19c2ccb7d988 00:09:27.018 bdev_register:d753e7df-a32a-4484-b0aa-c8f45a5ea10c 00:09:27.018 10:34:53 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:09:27.018 10:34:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:27.018 10:34:53 -- common/autotest_common.sh@10 -- # set +x 00:09:27.018 10:34:53 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:09:27.018 10:34:53 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:09:27.018 10:34:53 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:09:27.018 10:34:53 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:09:27.018 10:34:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:27.018 10:34:53 -- common/autotest_common.sh@10 -- # set +x 00:09:27.018 
10:34:53 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:09:27.018 10:34:53 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:27.018 10:34:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:27.276 MallocBdevForConfigChangeCheck 00:09:27.276 10:34:53 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:09:27.276 10:34:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:27.276 10:34:53 -- common/autotest_common.sh@10 -- # set +x 00:09:27.276 10:34:53 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:09:27.276 10:34:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:27.843 INFO: shutting down applications... 00:09:27.843 10:34:54 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:09:27.843 10:34:54 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:09:27.843 10:34:54 -- json_config/json_config.sh@431 -- # json_config_clear target 00:09:27.843 10:34:54 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:09:27.843 10:34:54 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:27.843 [2024-07-24 10:34:54.482131] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:28.101 Calling clear_vhost_scsi_subsystem 00:09:28.101 Calling clear_iscsi_subsystem 00:09:28.101 Calling clear_vhost_blk_subsystem 00:09:28.101 Calling clear_nbd_subsystem 00:09:28.101 Calling clear_nvmf_subsystem 00:09:28.101 Calling clear_bdev_subsystem 00:09:28.101 Calling clear_accel_subsystem 00:09:28.101 Calling clear_iobuf_subsystem 00:09:28.101 Calling clear_sock_subsystem 00:09:28.101 Calling clear_vmd_subsystem 00:09:28.101 Calling clear_scheduler_subsystem 00:09:28.101 10:34:54 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:28.101 10:34:54 -- json_config/json_config.sh@396 -- # count=100 00:09:28.101 10:34:54 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:09:28.101 10:34:54 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:28.101 10:34:54 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:28.101 10:34:54 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:28.360 10:34:55 -- json_config/json_config.sh@398 -- # break 00:09:28.360 10:34:55 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:09:28.360 10:34:55 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:09:28.360 10:34:55 -- json_config/json_config.sh@120 -- # local app=target 00:09:28.360 10:34:55 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:09:28.360 10:34:55 -- json_config/json_config.sh@124 -- # [[ -n 115453 ]] 00:09:28.360 10:34:55 -- json_config/json_config.sh@127 -- # kill -SIGINT 115453 00:09:28.360 10:34:55 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:09:28.360 10:34:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:28.360 10:34:55 -- 
json_config/json_config.sh@130 -- # kill -0 115453 00:09:28.360 10:34:55 -- json_config/json_config.sh@134 -- # sleep 0.5 00:09:28.926 10:34:55 -- json_config/json_config.sh@129 -- # (( i++ )) 00:09:28.926 10:34:55 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:09:28.926 10:34:55 -- json_config/json_config.sh@130 -- # kill -0 115453 00:09:28.926 10:34:55 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:09:28.926 10:34:55 -- json_config/json_config.sh@132 -- # break 00:09:28.926 SPDK target shutdown done 00:09:28.926 10:34:55 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:09:28.926 10:34:55 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:09:28.926 INFO: relaunching applications... 00:09:28.926 10:34:55 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:09:28.926 10:34:55 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:28.926 10:34:55 -- json_config/json_config.sh@98 -- # local app=target 00:09:28.926 10:34:55 -- json_config/json_config.sh@99 -- # shift 00:09:28.926 10:34:55 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:09:28.926 10:34:55 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:09:28.926 10:34:55 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:09:28.926 10:34:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:28.926 10:34:55 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:09:28.926 10:34:55 -- json_config/json_config.sh@111 -- # app_pid[$app]=115702 00:09:28.926 Waiting for target to run... 00:09:28.926 10:34:55 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:09:28.926 10:34:55 -- json_config/json_config.sh@114 -- # waitforlisten 115702 /var/tmp/spdk_tgt.sock 00:09:28.926 10:34:55 -- common/autotest_common.sh@819 -- # '[' -z 115702 ']' 00:09:28.926 10:34:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:28.926 10:34:55 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:28.926 10:34:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:28.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:28.926 10:34:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:28.926 10:34:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:28.926 10:34:55 -- common/autotest_common.sh@10 -- # set +x 00:09:28.926 [2024-07-24 10:34:55.601033] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
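The relaunch traced above comes down to starting spdk_tgt against the JSON config saved earlier and waiting for its RPC socket to answer. A minimal sketch of that flow, assuming a simple polling loop (the real waitforlisten helper may do more); the paths are the ones from this run:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &
  tgt_pid=$!
  # Poll the RPC socket until the target answers (assumed loop, not the helper's code).
  for _ in $(seq 1 100); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version \
          > /dev/null 2>&1 && break
      sleep 0.1
  done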
00:09:28.926 [2024-07-24 10:34:55.601332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115702 ] 00:09:29.526 [2024-07-24 10:34:56.130312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.801 [2024-07-24 10:34:56.218479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:29.801 [2024-07-24 10:34:56.218817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.801 [2024-07-24 10:34:56.377616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:29.801 [2024-07-24 10:34:56.377798] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:29.801 [2024-07-24 10:34:56.385535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:29.801 [2024-07-24 10:34:56.385632] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:29.801 [2024-07-24 10:34:56.393571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:29.801 [2024-07-24 10:34:56.393688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:29.801 [2024-07-24 10:34:56.393740] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:29.801 [2024-07-24 10:34:56.481246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:29.801 [2024-07-24 10:34:56.481447] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.801 [2024-07-24 10:34:56.481493] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:29.801 [2024-07-24 10:34:56.481531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.801 [2024-07-24 10:34:56.482256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.801 [2024-07-24 10:34:56.482318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:30.736 10:34:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.736 00:09:30.736 10:34:57 -- common/autotest_common.sh@852 -- # return 0 00:09:30.736 10:34:57 -- json_config/json_config.sh@115 -- # echo '' 00:09:30.736 10:34:57 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:09:30.736 INFO: Checking if target configuration is the same... 00:09:30.736 10:34:57 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:30.736 10:34:57 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:30.736 10:34:57 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:09:30.736 10:34:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:30.736 + '[' 2 -ne 2 ']' 00:09:30.736 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:30.736 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
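The "Checking if target configuration is the same" step feeds the live save_config output and the saved spdk_tgt_config.json into json_diff.sh, which canonicalizes both with config_filter.py and diffs them. A rough equivalent, assuming config_filter.py filters stdin to stdout as its bare invocation in the trace suggests; the temp-file names are illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  # Canonicalize both configs, then compare; identical files mean the restart preserved the config.
  $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > /tmp/live.sorted.json
  $filter -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.sorted.json
  diff -u /tmp/live.sorted.json /tmp/saved.sorted.json && echo 'INFO: JSON config files are the same'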
00:09:30.736 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:30.736 +++ basename /dev/fd/62 00:09:30.736 ++ mktemp /tmp/62.XXX 00:09:30.736 + tmp_file_1=/tmp/62.g7F 00:09:30.736 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:30.736 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:30.736 + tmp_file_2=/tmp/spdk_tgt_config.json.V1u 00:09:30.736 + ret=0 00:09:30.736 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:30.995 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:30.995 + diff -u /tmp/62.g7F /tmp/spdk_tgt_config.json.V1u 00:09:30.995 + echo 'INFO: JSON config files are the same' 00:09:30.995 INFO: JSON config files are the same 00:09:30.995 + rm /tmp/62.g7F /tmp/spdk_tgt_config.json.V1u 00:09:30.995 + exit 0 00:09:30.995 10:34:57 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:09:30.995 INFO: changing configuration and checking if this can be detected... 00:09:30.995 10:34:57 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:30.995 10:34:57 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:30.995 10:34:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:31.253 10:34:57 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:31.253 10:34:57 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:09:31.253 10:34:57 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:31.253 + '[' 2 -ne 2 ']' 00:09:31.253 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:31.253 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:31.253 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:31.253 +++ basename /dev/fd/62 00:09:31.253 ++ mktemp /tmp/62.XXX 00:09:31.253 + tmp_file_1=/tmp/62.yCb 00:09:31.253 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:31.253 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:31.253 + tmp_file_2=/tmp/spdk_tgt_config.json.Fzk 00:09:31.253 + ret=0 00:09:31.253 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:31.820 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:31.820 + diff -u /tmp/62.yCb /tmp/spdk_tgt_config.json.Fzk 00:09:31.820 + ret=1 00:09:31.820 + echo '=== Start of file: /tmp/62.yCb ===' 00:09:31.820 + cat /tmp/62.yCb 00:09:31.820 + echo '=== End of file: /tmp/62.yCb ===' 00:09:31.820 + echo '' 00:09:31.820 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Fzk ===' 00:09:31.820 + cat /tmp/spdk_tgt_config.json.Fzk 00:09:31.821 + echo '=== End of file: /tmp/spdk_tgt_config.json.Fzk ===' 00:09:31.821 + echo '' 00:09:31.821 + rm /tmp/62.yCb /tmp/spdk_tgt_config.json.Fzk 00:09:31.821 + exit 1 00:09:31.821 INFO: configuration change detected. 00:09:31.821 10:34:58 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
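The change detection just traced amounts to deleting the sentinel bdev over RPC and re-running the same sort-and-diff comparison, which must now fail. A condensed sketch, not the script's literal code:

  # Remove the sentinel bdev so the live config no longer matches the saved file.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      bdev_malloc_delete MallocBdevForConfigChangeCheck
  # json_diff.sh exits non-zero when the two configs differ, which is the expected outcome here.
  if /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh \
         <(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) \
         /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json; then
      echo 'ERROR: configuration change was not detected' >&2
  else
      echo 'INFO: configuration change detected.'
  fi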
00:09:31.821 10:34:58 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:09:31.821 10:34:58 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:09:31.821 10:34:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:31.821 10:34:58 -- common/autotest_common.sh@10 -- # set +x 00:09:31.821 10:34:58 -- json_config/json_config.sh@360 -- # local ret=0 00:09:31.821 10:34:58 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:09:31.821 10:34:58 -- json_config/json_config.sh@370 -- # [[ -n 115702 ]] 00:09:31.821 10:34:58 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:09:31.821 10:34:58 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:09:31.821 10:34:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:31.821 10:34:58 -- common/autotest_common.sh@10 -- # set +x 00:09:31.821 10:34:58 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:09:31.821 10:34:58 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:31.821 10:34:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:32.079 10:34:58 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:32.080 10:34:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:32.338 10:34:58 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:32.338 10:34:58 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:32.596 10:34:59 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:32.596 10:34:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:32.867 10:34:59 -- json_config/json_config.sh@246 -- # uname -s 00:09:32.867 10:34:59 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:09:32.867 10:34:59 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:09:32.867 10:34:59 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:09:32.867 10:34:59 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:09:32.867 10:34:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:32.867 10:34:59 -- common/autotest_common.sh@10 -- # set +x 00:09:32.867 10:34:59 -- json_config/json_config.sh@376 -- # killprocess 115702 00:09:32.867 10:34:59 -- common/autotest_common.sh@926 -- # '[' -z 115702 ']' 00:09:32.867 10:34:59 -- common/autotest_common.sh@930 -- # kill -0 115702 00:09:32.867 10:34:59 -- common/autotest_common.sh@931 -- # uname 00:09:32.867 10:34:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:32.867 10:34:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115702 00:09:32.867 10:34:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:32.867 10:34:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:32.867 10:34:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115702' 00:09:32.867 killing process with pid 115702 00:09:32.867 10:34:59 -- common/autotest_common.sh@945 -- # kill 115702 00:09:32.867 10:34:59 -- common/autotest_common.sh@950 -- # wait 115702 00:09:33.448 10:34:59 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:33.448 10:34:59 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:09:33.448 10:34:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:33.448 10:34:59 -- common/autotest_common.sh@10 -- # set +x 00:09:33.448 10:34:59 -- json_config/json_config.sh@381 -- # return 0 00:09:33.448 10:34:59 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:09:33.448 INFO: Success 00:09:33.448 00:09:33.448 real 0m12.246s 00:09:33.448 user 0m18.516s 00:09:33.448 sys 0m2.528s 00:09:33.448 10:34:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.448 10:34:59 -- common/autotest_common.sh@10 -- # set +x 00:09:33.448 ************************************ 00:09:33.448 END TEST json_config 00:09:33.448 ************************************ 00:09:33.448 10:34:59 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:33.448 10:34:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:33.448 10:34:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:33.448 10:34:59 -- common/autotest_common.sh@10 -- # set +x 00:09:33.448 ************************************ 00:09:33.448 START TEST json_config_extra_key 00:09:33.448 ************************************ 00:09:33.448 10:34:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:33.448 10:34:59 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:33.449 10:34:59 -- nvmf/common.sh@7 -- # uname -s 00:09:33.449 10:34:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.449 10:34:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.449 10:34:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.449 10:34:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.449 10:34:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.449 10:34:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.449 10:34:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.449 10:34:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.449 10:34:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.449 10:34:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.449 10:34:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:bc1a1079-6f79-4a12-9e0b-8b4b983783c6 00:09:33.449 10:34:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=bc1a1079-6f79-4a12-9e0b-8b4b983783c6 00:09:33.449 10:34:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.449 10:34:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.449 10:34:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:33.449 10:34:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.449 10:34:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.449 10:34:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.449 10:34:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.449 10:34:59 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:33.449 10:34:59 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:33.449 10:34:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:33.449 10:34:59 -- paths/export.sh@5 -- # export PATH 00:09:33.449 10:34:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:33.449 10:34:59 -- nvmf/common.sh@46 -- # : 0 00:09:33.449 10:34:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:33.449 10:34:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:33.449 10:34:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:33.449 10:34:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.449 10:34:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.449 10:34:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:33.449 10:34:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:33.449 10:34:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:33.449 INFO: launching applications... 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
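The launch that follows starts spdk_tgt with --json pointing at test/json_config/extra_key.json. For reference, --json expects the same "subsystems" layout that save_config emits; the snippet below is purely illustrative of that shape and is not the contents of extra_key.json (the bdev name and sizes are made up):

  cat > /tmp/minimal_config.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "MallocExample", "num_blocks": 8192, "block_size": 512 }
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_config.json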
00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@25 -- # shift 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=115879 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:33.449 Waiting for target to run... 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:09:33.449 10:34:59 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 115879 /var/tmp/spdk_tgt.sock 00:09:33.449 10:34:59 -- common/autotest_common.sh@819 -- # '[' -z 115879 ']' 00:09:33.449 10:34:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:33.449 10:34:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:33.449 10:34:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:33.449 10:34:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.449 10:34:59 -- common/autotest_common.sh@10 -- # set +x 00:09:33.449 [2024-07-24 10:35:00.077100] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:33.449 [2024-07-24 10:35:00.078075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115879 ] 00:09:34.016 [2024-07-24 10:35:00.644962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.274 [2024-07-24 10:35:00.739719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:34.274 [2024-07-24 10:35:00.740276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.532 10:35:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.532 10:35:01 -- common/autotest_common.sh@852 -- # return 0 00:09:34.532 00:09:34.532 INFO: shutting down applications... 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 115879 ]] 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 115879 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115879 00:09:34.532 10:35:01 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:35.098 10:35:01 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:35.098 10:35:01 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:35.098 10:35:01 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115879 00:09:35.098 10:35:01 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 115879 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@52 -- # break 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:09:35.665 SPDK target shutdown done 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:09:35.665 Success 00:09:35.665 10:35:02 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:09:35.665 00:09:35.665 real 0m2.116s 00:09:35.665 user 0m1.669s 00:09:35.665 sys 0m0.553s 00:09:35.665 10:35:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.665 10:35:02 -- common/autotest_common.sh@10 -- # set +x 00:09:35.665 ************************************ 00:09:35.665 END TEST json_config_extra_key 00:09:35.665 ************************************ 00:09:35.665 10:35:02 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:35.665 10:35:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:35.665 10:35:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:35.665 10:35:02 -- common/autotest_common.sh@10 -- # set +x 00:09:35.665 ************************************ 00:09:35.665 START TEST alias_rpc 00:09:35.665 ************************************ 00:09:35.665 10:35:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:35.665 * Looking for test storage... 
00:09:35.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:35.665 10:35:02 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:35.665 10:35:02 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=115959 00:09:35.665 10:35:02 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:35.665 10:35:02 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 115959 00:09:35.665 10:35:02 -- common/autotest_common.sh@819 -- # '[' -z 115959 ']' 00:09:35.665 10:35:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.665 10:35:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:35.665 10:35:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.665 10:35:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:35.665 10:35:02 -- common/autotest_common.sh@10 -- # set +x 00:09:35.665 [2024-07-24 10:35:02.243608] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:35.665 [2024-07-24 10:35:02.243906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115959 ] 00:09:35.924 [2024-07-24 10:35:02.390126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.924 [2024-07-24 10:35:02.497277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.924 [2024-07-24 10:35:02.497583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.491 10:35:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.491 10:35:03 -- common/autotest_common.sh@852 -- # return 0 00:09:36.491 10:35:03 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:36.750 10:35:03 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 115959 00:09:36.750 10:35:03 -- common/autotest_common.sh@926 -- # '[' -z 115959 ']' 00:09:36.750 10:35:03 -- common/autotest_common.sh@930 -- # kill -0 115959 00:09:36.750 10:35:03 -- common/autotest_common.sh@931 -- # uname 00:09:36.750 10:35:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:36.750 10:35:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115959 00:09:37.008 killing process with pid 115959 00:09:37.008 10:35:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:37.008 10:35:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:37.008 10:35:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115959' 00:09:37.008 10:35:03 -- common/autotest_common.sh@945 -- # kill 115959 00:09:37.008 10:35:03 -- common/autotest_common.sh@950 -- # wait 115959 00:09:37.573 00:09:37.573 real 0m1.923s 00:09:37.573 user 0m1.979s 00:09:37.573 sys 0m0.566s 00:09:37.573 10:35:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:37.573 ************************************ 00:09:37.573 END TEST alias_rpc 00:09:37.573 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:09:37.573 ************************************ 00:09:37.573 10:35:04 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:09:37.573 10:35:04 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp 
/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:37.573 10:35:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:37.573 10:35:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:37.573 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:09:37.573 ************************************ 00:09:37.573 START TEST spdkcli_tcp 00:09:37.573 ************************************ 00:09:37.573 10:35:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:37.573 * Looking for test storage... 00:09:37.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:37.573 10:35:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:37.573 10:35:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:37.573 10:35:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:37.573 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=116046 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:37.573 10:35:04 -- spdkcli/tcp.sh@27 -- # waitforlisten 116046 00:09:37.573 10:35:04 -- common/autotest_common.sh@819 -- # '[' -z 116046 ']' 00:09:37.573 10:35:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.573 10:35:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:37.573 10:35:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.573 10:35:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:37.573 10:35:04 -- common/autotest_common.sh@10 -- # set +x 00:09:37.573 [2024-07-24 10:35:04.225503] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
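What tcp.sh exercises next, in condensed form: socat bridges TCP 127.0.0.1:9998 to the target's UNIX RPC socket, and the RPC client is then driven over TCP with a retry count and timeout. Tearing the bridge down with kill at the end is an addition here for completeness:

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # Same rpc.py, but pointed at the TCP endpoint instead of the UNIX socket.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"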
00:09:37.573 [2024-07-24 10:35:04.225758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116046 ] 00:09:37.832 [2024-07-24 10:35:04.372532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:37.832 [2024-07-24 10:35:04.497638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:37.832 [2024-07-24 10:35:04.498071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.832 [2024-07-24 10:35:04.498083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.767 10:35:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.767 10:35:05 -- common/autotest_common.sh@852 -- # return 0 00:09:38.767 10:35:05 -- spdkcli/tcp.sh@31 -- # socat_pid=116068 00:09:38.767 10:35:05 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:38.767 10:35:05 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:38.767 [ 00:09:38.767 "spdk_get_version", 00:09:38.767 "rpc_get_methods", 00:09:38.767 "trace_get_info", 00:09:38.767 "trace_get_tpoint_group_mask", 00:09:38.767 "trace_disable_tpoint_group", 00:09:38.767 "trace_enable_tpoint_group", 00:09:38.767 "trace_clear_tpoint_mask", 00:09:38.767 "trace_set_tpoint_mask", 00:09:38.767 "framework_get_pci_devices", 00:09:38.767 "framework_get_config", 00:09:38.767 "framework_get_subsystems", 00:09:38.767 "iobuf_get_stats", 00:09:38.767 "iobuf_set_options", 00:09:38.767 "sock_set_default_impl", 00:09:38.767 "sock_impl_set_options", 00:09:38.767 "sock_impl_get_options", 00:09:38.767 "vmd_rescan", 00:09:38.768 "vmd_remove_device", 00:09:38.768 "vmd_enable", 00:09:38.768 "accel_get_stats", 00:09:38.768 "accel_set_options", 00:09:38.768 "accel_set_driver", 00:09:38.768 "accel_crypto_key_destroy", 00:09:38.768 "accel_crypto_keys_get", 00:09:38.768 "accel_crypto_key_create", 00:09:38.768 "accel_assign_opc", 00:09:38.768 "accel_get_module_info", 00:09:38.768 "accel_get_opc_assignments", 00:09:38.768 "notify_get_notifications", 00:09:38.768 "notify_get_types", 00:09:38.768 "bdev_get_histogram", 00:09:38.768 "bdev_enable_histogram", 00:09:38.768 "bdev_set_qos_limit", 00:09:38.768 "bdev_set_qd_sampling_period", 00:09:38.768 "bdev_get_bdevs", 00:09:38.768 "bdev_reset_iostat", 00:09:38.768 "bdev_get_iostat", 00:09:38.768 "bdev_examine", 00:09:38.768 "bdev_wait_for_examine", 00:09:38.768 "bdev_set_options", 00:09:38.768 "scsi_get_devices", 00:09:38.768 "thread_set_cpumask", 00:09:38.768 "framework_get_scheduler", 00:09:38.768 "framework_set_scheduler", 00:09:38.768 "framework_get_reactors", 00:09:38.768 "thread_get_io_channels", 00:09:38.768 "thread_get_pollers", 00:09:38.768 "thread_get_stats", 00:09:38.768 "framework_monitor_context_switch", 00:09:38.768 "spdk_kill_instance", 00:09:38.768 "log_enable_timestamps", 00:09:38.768 "log_get_flags", 00:09:38.768 "log_clear_flag", 00:09:38.768 "log_set_flag", 00:09:38.768 "log_get_level", 00:09:38.768 "log_set_level", 00:09:38.768 "log_get_print_level", 00:09:38.768 "log_set_print_level", 00:09:38.768 "framework_enable_cpumask_locks", 00:09:38.768 "framework_disable_cpumask_locks", 00:09:38.768 "framework_wait_init", 00:09:38.768 "framework_start_init", 00:09:38.768 "virtio_blk_create_transport", 00:09:38.768 "virtio_blk_get_transports", 
00:09:38.768 "vhost_controller_set_coalescing", 00:09:38.768 "vhost_get_controllers", 00:09:38.768 "vhost_delete_controller", 00:09:38.768 "vhost_create_blk_controller", 00:09:38.768 "vhost_scsi_controller_remove_target", 00:09:38.768 "vhost_scsi_controller_add_target", 00:09:38.768 "vhost_start_scsi_controller", 00:09:38.768 "vhost_create_scsi_controller", 00:09:38.768 "nbd_get_disks", 00:09:38.768 "nbd_stop_disk", 00:09:38.768 "nbd_start_disk", 00:09:38.768 "env_dpdk_get_mem_stats", 00:09:38.768 "nvmf_subsystem_get_listeners", 00:09:38.768 "nvmf_subsystem_get_qpairs", 00:09:38.768 "nvmf_subsystem_get_controllers", 00:09:38.768 "nvmf_get_stats", 00:09:38.768 "nvmf_get_transports", 00:09:38.768 "nvmf_create_transport", 00:09:38.768 "nvmf_get_targets", 00:09:38.768 "nvmf_delete_target", 00:09:38.768 "nvmf_create_target", 00:09:38.768 "nvmf_subsystem_allow_any_host", 00:09:38.768 "nvmf_subsystem_remove_host", 00:09:38.768 "nvmf_subsystem_add_host", 00:09:38.768 "nvmf_subsystem_remove_ns", 00:09:38.768 "nvmf_subsystem_add_ns", 00:09:38.768 "nvmf_subsystem_listener_set_ana_state", 00:09:38.768 "nvmf_discovery_get_referrals", 00:09:38.768 "nvmf_discovery_remove_referral", 00:09:38.768 "nvmf_discovery_add_referral", 00:09:38.768 "nvmf_subsystem_remove_listener", 00:09:38.768 "nvmf_subsystem_add_listener", 00:09:38.768 "nvmf_delete_subsystem", 00:09:38.768 "nvmf_create_subsystem", 00:09:38.768 "nvmf_get_subsystems", 00:09:38.768 "nvmf_set_crdt", 00:09:38.768 "nvmf_set_config", 00:09:38.768 "nvmf_set_max_subsystems", 00:09:38.768 "iscsi_set_options", 00:09:38.768 "iscsi_get_auth_groups", 00:09:38.768 "iscsi_auth_group_remove_secret", 00:09:38.768 "iscsi_auth_group_add_secret", 00:09:38.768 "iscsi_delete_auth_group", 00:09:38.768 "iscsi_create_auth_group", 00:09:38.768 "iscsi_set_discovery_auth", 00:09:38.768 "iscsi_get_options", 00:09:38.768 "iscsi_target_node_request_logout", 00:09:38.768 "iscsi_target_node_set_redirect", 00:09:38.768 "iscsi_target_node_set_auth", 00:09:38.768 "iscsi_target_node_add_lun", 00:09:38.768 "iscsi_get_connections", 00:09:38.768 "iscsi_portal_group_set_auth", 00:09:38.768 "iscsi_start_portal_group", 00:09:38.768 "iscsi_delete_portal_group", 00:09:38.768 "iscsi_create_portal_group", 00:09:38.768 "iscsi_get_portal_groups", 00:09:38.768 "iscsi_delete_target_node", 00:09:38.768 "iscsi_target_node_remove_pg_ig_maps", 00:09:38.768 "iscsi_target_node_add_pg_ig_maps", 00:09:38.768 "iscsi_create_target_node", 00:09:38.768 "iscsi_get_target_nodes", 00:09:38.768 "iscsi_delete_initiator_group", 00:09:38.768 "iscsi_initiator_group_remove_initiators", 00:09:38.768 "iscsi_initiator_group_add_initiators", 00:09:38.768 "iscsi_create_initiator_group", 00:09:38.768 "iscsi_get_initiator_groups", 00:09:38.768 "iaa_scan_accel_module", 00:09:38.768 "dsa_scan_accel_module", 00:09:38.768 "ioat_scan_accel_module", 00:09:38.768 "accel_error_inject_error", 00:09:38.768 "bdev_iscsi_delete", 00:09:38.768 "bdev_iscsi_create", 00:09:38.768 "bdev_iscsi_set_options", 00:09:38.768 "bdev_virtio_attach_controller", 00:09:38.768 "bdev_virtio_scsi_get_devices", 00:09:38.768 "bdev_virtio_detach_controller", 00:09:38.768 "bdev_virtio_blk_set_hotplug", 00:09:38.768 "bdev_ftl_set_property", 00:09:38.768 "bdev_ftl_get_properties", 00:09:38.768 "bdev_ftl_get_stats", 00:09:38.768 "bdev_ftl_unmap", 00:09:38.768 "bdev_ftl_unload", 00:09:38.768 "bdev_ftl_delete", 00:09:38.768 "bdev_ftl_load", 00:09:38.768 "bdev_ftl_create", 00:09:38.768 "bdev_aio_delete", 00:09:38.768 "bdev_aio_rescan", 00:09:38.768 "bdev_aio_create", 
00:09:38.768 "blobfs_create", 00:09:38.768 "blobfs_detect", 00:09:38.768 "blobfs_set_cache_size", 00:09:38.768 "bdev_zone_block_delete", 00:09:38.768 "bdev_zone_block_create", 00:09:38.768 "bdev_delay_delete", 00:09:38.768 "bdev_delay_create", 00:09:38.768 "bdev_delay_update_latency", 00:09:38.768 "bdev_split_delete", 00:09:38.768 "bdev_split_create", 00:09:38.768 "bdev_error_inject_error", 00:09:38.768 "bdev_error_delete", 00:09:38.768 "bdev_error_create", 00:09:38.768 "bdev_raid_set_options", 00:09:38.768 "bdev_raid_remove_base_bdev", 00:09:38.768 "bdev_raid_add_base_bdev", 00:09:38.768 "bdev_raid_delete", 00:09:38.768 "bdev_raid_create", 00:09:38.768 "bdev_raid_get_bdevs", 00:09:38.768 "bdev_lvol_grow_lvstore", 00:09:38.768 "bdev_lvol_get_lvols", 00:09:38.768 "bdev_lvol_get_lvstores", 00:09:38.768 "bdev_lvol_delete", 00:09:38.768 "bdev_lvol_set_read_only", 00:09:38.768 "bdev_lvol_resize", 00:09:38.768 "bdev_lvol_decouple_parent", 00:09:38.768 "bdev_lvol_inflate", 00:09:38.768 "bdev_lvol_rename", 00:09:38.768 "bdev_lvol_clone_bdev", 00:09:38.768 "bdev_lvol_clone", 00:09:38.768 "bdev_lvol_snapshot", 00:09:38.768 "bdev_lvol_create", 00:09:38.768 "bdev_lvol_delete_lvstore", 00:09:38.768 "bdev_lvol_rename_lvstore", 00:09:38.768 "bdev_lvol_create_lvstore", 00:09:38.768 "bdev_passthru_delete", 00:09:38.768 "bdev_passthru_create", 00:09:38.768 "bdev_nvme_cuse_unregister", 00:09:38.768 "bdev_nvme_cuse_register", 00:09:38.768 "bdev_opal_new_user", 00:09:38.768 "bdev_opal_set_lock_state", 00:09:38.768 "bdev_opal_delete", 00:09:38.768 "bdev_opal_get_info", 00:09:38.768 "bdev_opal_create", 00:09:38.768 "bdev_nvme_opal_revert", 00:09:38.768 "bdev_nvme_opal_init", 00:09:38.768 "bdev_nvme_send_cmd", 00:09:38.768 "bdev_nvme_get_path_iostat", 00:09:38.768 "bdev_nvme_get_mdns_discovery_info", 00:09:38.768 "bdev_nvme_stop_mdns_discovery", 00:09:38.768 "bdev_nvme_start_mdns_discovery", 00:09:38.768 "bdev_nvme_set_multipath_policy", 00:09:38.768 "bdev_nvme_set_preferred_path", 00:09:38.768 "bdev_nvme_get_io_paths", 00:09:38.768 "bdev_nvme_remove_error_injection", 00:09:38.768 "bdev_nvme_add_error_injection", 00:09:38.768 "bdev_nvme_get_discovery_info", 00:09:38.768 "bdev_nvme_stop_discovery", 00:09:38.768 "bdev_nvme_start_discovery", 00:09:38.768 "bdev_nvme_get_controller_health_info", 00:09:38.768 "bdev_nvme_disable_controller", 00:09:38.768 "bdev_nvme_enable_controller", 00:09:38.768 "bdev_nvme_reset_controller", 00:09:38.768 "bdev_nvme_get_transport_statistics", 00:09:38.768 "bdev_nvme_apply_firmware", 00:09:38.768 "bdev_nvme_detach_controller", 00:09:38.768 "bdev_nvme_get_controllers", 00:09:38.768 "bdev_nvme_attach_controller", 00:09:38.768 "bdev_nvme_set_hotplug", 00:09:38.768 "bdev_nvme_set_options", 00:09:38.768 "bdev_null_resize", 00:09:38.768 "bdev_null_delete", 00:09:38.768 "bdev_null_create", 00:09:38.768 "bdev_malloc_delete", 00:09:38.768 "bdev_malloc_create" 00:09:38.768 ] 00:09:38.768 10:35:05 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:38.768 10:35:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:38.768 10:35:05 -- common/autotest_common.sh@10 -- # set +x 00:09:38.768 10:35:05 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:38.768 10:35:05 -- spdkcli/tcp.sh@38 -- # killprocess 116046 00:09:38.768 10:35:05 -- common/autotest_common.sh@926 -- # '[' -z 116046 ']' 00:09:38.768 10:35:05 -- common/autotest_common.sh@930 -- # kill -0 116046 00:09:38.768 10:35:05 -- common/autotest_common.sh@931 -- # uname 00:09:39.027 10:35:05 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:09:39.027 10:35:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116046 00:09:39.027 killing process with pid 116046 00:09:39.027 10:35:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:39.027 10:35:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:39.027 10:35:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116046' 00:09:39.027 10:35:05 -- common/autotest_common.sh@945 -- # kill 116046 00:09:39.027 10:35:05 -- common/autotest_common.sh@950 -- # wait 116046 00:09:39.594 00:09:39.594 real 0m2.020s 00:09:39.594 user 0m3.549s 00:09:39.594 sys 0m0.579s 00:09:39.594 10:35:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.594 ************************************ 00:09:39.594 END TEST spdkcli_tcp 00:09:39.594 ************************************ 00:09:39.594 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:09:39.594 10:35:06 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:39.594 10:35:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:39.594 10:35:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:39.594 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:09:39.594 ************************************ 00:09:39.594 START TEST dpdk_mem_utility 00:09:39.594 ************************************ 00:09:39.594 10:35:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:39.594 * Looking for test storage... 00:09:39.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:39.594 10:35:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:39.594 10:35:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=116146 00:09:39.594 10:35:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:39.594 10:35:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 116146 00:09:39.594 10:35:06 -- common/autotest_common.sh@819 -- # '[' -z 116146 ']' 00:09:39.594 10:35:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.594 10:35:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:39.594 10:35:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.594 10:35:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:39.594 10:35:06 -- common/autotest_common.sh@10 -- # set +x 00:09:39.860 [2024-07-24 10:35:06.303359] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
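The memory-utility check that follows asks the running target to dump DPDK memory statistics, then post-processes the dump file it reports. Condensed form of those steps (rpc.py talks to the default /var/tmp/spdk.sock here):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # reports {"filename": "/tmp/spdk_mem_dump.txt"}
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # detailed view of heap 0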
00:09:39.860 [2024-07-24 10:35:06.303679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116146 ] 00:09:39.860 [2024-07-24 10:35:06.448272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.120 [2024-07-24 10:35:06.570898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:40.120 [2024-07-24 10:35:06.571250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.687 10:35:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:40.687 10:35:07 -- common/autotest_common.sh@852 -- # return 0 00:09:40.687 10:35:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:40.687 10:35:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:40.687 10:35:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.687 10:35:07 -- common/autotest_common.sh@10 -- # set +x 00:09:40.687 { 00:09:40.687 "filename": "/tmp/spdk_mem_dump.txt" 00:09:40.687 } 00:09:40.687 10:35:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.687 10:35:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:40.687 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:40.687 1 heaps totaling size 814.000000 MiB 00:09:40.687 size: 814.000000 MiB heap id: 0 00:09:40.687 end heaps---------- 00:09:40.687 8 mempools totaling size 598.116089 MiB 00:09:40.687 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:40.687 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:40.687 size: 84.521057 MiB name: bdev_io_116146 00:09:40.687 size: 51.011292 MiB name: evtpool_116146 00:09:40.687 size: 50.003479 MiB name: msgpool_116146 00:09:40.687 size: 21.763794 MiB name: PDU_Pool 00:09:40.687 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:40.687 size: 0.026123 MiB name: Session_Pool 00:09:40.687 end mempools------- 00:09:40.687 6 memzones totaling size 4.142822 MiB 00:09:40.687 size: 1.000366 MiB name: RG_ring_0_116146 00:09:40.687 size: 1.000366 MiB name: RG_ring_1_116146 00:09:40.687 size: 1.000366 MiB name: RG_ring_4_116146 00:09:40.687 size: 1.000366 MiB name: RG_ring_5_116146 00:09:40.687 size: 0.125366 MiB name: RG_ring_2_116146 00:09:40.687 size: 0.015991 MiB name: RG_ring_3_116146 00:09:40.687 end memzones------- 00:09:40.687 10:35:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:40.948 heap id: 0 total size: 814.000000 MiB number of busy elements: 214 number of free elements: 15 00:09:40.948 list of free elements. 
size: 12.487671 MiB 00:09:40.948 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:40.948 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:40.948 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:40.948 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:40.948 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:40.948 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:40.948 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:40.948 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:40.948 element at address: 0x200000200000 with size: 0.837219 MiB 00:09:40.948 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:09:40.948 element at address: 0x20000b200000 with size: 0.489624 MiB 00:09:40.948 element at address: 0x200000800000 with size: 0.487061 MiB 00:09:40.948 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:40.948 element at address: 0x200027e00000 with size: 0.402893 MiB 00:09:40.948 element at address: 0x200003a00000 with size: 0.351685 MiB 00:09:40.948 list of standard malloc elements. size: 199.249756 MiB 00:09:40.948 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:40.948 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:40.948 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:40.948 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:40.948 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:40.948 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:40.948 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:40.948 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:40.948 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:40.948 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d77c0 with size: 0.000183 MiB 
00:09:40.948 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:40.948 element at 
address: 0x20000b27da00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:40.948 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa937c0 
with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:40.949 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e67240 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e67300 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6df00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e640 with size: 0.000183 MiB 
00:09:40.949 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:40.949 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:40.949 list of memzone associated elements. 
size: 602.262573 MiB 00:09:40.949 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:40.949 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:40.949 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:40.949 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:40.949 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:40.949 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_116146_0 00:09:40.949 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:40.949 associated memzone info: size: 48.002930 MiB name: MP_evtpool_116146_0 00:09:40.949 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:40.949 associated memzone info: size: 48.002930 MiB name: MP_msgpool_116146_0 00:09:40.949 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:40.949 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:40.949 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:40.949 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:40.949 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:40.949 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_116146 00:09:40.949 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:40.949 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_116146 00:09:40.949 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:40.949 associated memzone info: size: 1.007996 MiB name: MP_evtpool_116146 00:09:40.949 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:40.949 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:40.949 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:40.949 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:40.949 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:40.949 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:40.949 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:40.949 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:40.949 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:40.949 associated memzone info: size: 1.000366 MiB name: RG_ring_0_116146 00:09:40.949 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:40.949 associated memzone info: size: 1.000366 MiB name: RG_ring_1_116146 00:09:40.949 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:40.949 associated memzone info: size: 1.000366 MiB name: RG_ring_4_116146 00:09:40.949 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:40.949 associated memzone info: size: 1.000366 MiB name: RG_ring_5_116146 00:09:40.949 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:40.949 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_116146 00:09:40.949 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:40.949 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:40.949 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:40.949 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:40.949 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:40.949 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:40.949 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:40.949 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_116146 00:09:40.949 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:40.949 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:40.949 element at address: 0x200027e673c0 with size: 0.023743 MiB 00:09:40.949 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:40.949 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:40.949 associated memzone info: size: 0.015991 MiB name: RG_ring_3_116146 00:09:40.949 element at address: 0x200027e6d500 with size: 0.002441 MiB 00:09:40.949 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:40.950 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:09:40.950 associated memzone info: size: 0.000183 MiB name: MP_msgpool_116146 00:09:40.950 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:40.950 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_116146 00:09:40.950 element at address: 0x200027e6dfc0 with size: 0.000305 MiB 00:09:40.950 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:40.950 10:35:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:40.950 10:35:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 116146 00:09:40.950 10:35:07 -- common/autotest_common.sh@926 -- # '[' -z 116146 ']' 00:09:40.950 10:35:07 -- common/autotest_common.sh@930 -- # kill -0 116146 00:09:40.950 10:35:07 -- common/autotest_common.sh@931 -- # uname 00:09:40.950 10:35:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:40.950 10:35:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116146 00:09:40.950 killing process with pid 116146 00:09:40.950 10:35:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:40.950 10:35:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:40.950 10:35:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116146' 00:09:40.950 10:35:07 -- common/autotest_common.sh@945 -- # kill 116146 00:09:40.950 10:35:07 -- common/autotest_common.sh@950 -- # wait 116146 00:09:41.515 00:09:41.515 real 0m1.836s 00:09:41.515 user 0m1.858s 00:09:41.515 sys 0m0.528s 00:09:41.515 10:35:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:41.515 ************************************ 00:09:41.515 END TEST dpdk_mem_utility 00:09:41.515 ************************************ 00:09:41.515 10:35:07 -- common/autotest_common.sh@10 -- # set +x 00:09:41.515 10:35:08 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:41.515 10:35:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:41.515 10:35:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.515 10:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:41.515 ************************************ 00:09:41.515 START TEST event 00:09:41.515 ************************************ 00:09:41.515 10:35:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:41.515 * Looking for test storage... 
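The dump above (one 814 MiB heap, 8 mempools and 6 memzones tied to pid 116146) comes straight from the two helpers the test drives, and can be reproduced against any running SPDK target:

# Ask the target to write its DPDK memory statistics (defaults to
# /tmp/spdk_mem_dump.txt, as in the trace above)...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# ...then summarize: overall heap/mempool/memzone totals, and element-level
# detail for heap 0.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0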
00:09:41.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:41.515 10:35:08 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:41.515 10:35:08 -- bdev/nbd_common.sh@6 -- # set -e 00:09:41.515 10:35:08 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:41.515 10:35:08 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:41.515 10:35:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:41.515 10:35:08 -- common/autotest_common.sh@10 -- # set +x 00:09:41.515 ************************************ 00:09:41.515 START TEST event_perf 00:09:41.515 ************************************ 00:09:41.515 10:35:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:41.515 Running I/O for 1 seconds...[2024-07-24 10:35:08.159122] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:09:41.515 [2024-07-24 10:35:08.159399] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116231 ] 00:09:41.772 [2024-07-24 10:35:08.336682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.772 [2024-07-24 10:35:08.451117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.772 [2024-07-24 10:35:08.451392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.772 [2024-07-24 10:35:08.451267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.772 [2024-07-24 10:35:08.451401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.144 Running I/O for 1 seconds... 00:09:43.144 lcore 0: 128233 00:09:43.144 lcore 1: 128234 00:09:43.144 lcore 2: 128235 00:09:43.144 lcore 3: 128238 00:09:43.144 done. 00:09:43.144 ************************************ 00:09:43.144 END TEST event_perf 00:09:43.144 ************************************ 00:09:43.144 00:09:43.144 real 0m1.482s 00:09:43.144 user 0m4.260s 00:09:43.144 sys 0m0.104s 00:09:43.144 10:35:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.144 10:35:09 -- common/autotest_common.sh@10 -- # set +x 00:09:43.144 10:35:09 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:43.144 10:35:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:43.144 10:35:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:43.144 10:35:09 -- common/autotest_common.sh@10 -- # set +x 00:09:43.144 ************************************ 00:09:43.144 START TEST event_reactor 00:09:43.144 ************************************ 00:09:43.144 10:35:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:43.144 [2024-07-24 10:35:09.692202] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
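The event_perf figures above are per-reactor counters: with -m 0xF one reactor runs on each of cores 0-3, and each lcore line is the number of events that reactor processed in the -t 1 second window (about 128k apiece, roughly 513k in total here). A quick way to total them when re-running the benchmark, assuming the same in-tree binary:

# Sum the per-lcore counters; with a 1 s run the total is also events/sec.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 \
  | awk '/lcore [0-9]+:/ { total += $3 } END { print "total:", total }'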
00:09:43.144 [2024-07-24 10:35:09.692433] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116277 ] 00:09:43.402 [2024-07-24 10:35:09.834449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.402 [2024-07-24 10:35:09.941779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.776 test_start 00:09:44.776 oneshot 00:09:44.776 tick 100 00:09:44.776 tick 100 00:09:44.776 tick 250 00:09:44.776 tick 100 00:09:44.776 tick 100 00:09:44.776 tick 100 00:09:44.776 tick 250 00:09:44.776 tick 500 00:09:44.776 tick 100 00:09:44.776 tick 100 00:09:44.776 tick 250 00:09:44.776 tick 100 00:09:44.776 tick 100 00:09:44.776 test_end 00:09:44.776 00:09:44.776 real 0m1.412s 00:09:44.776 user 0m1.227s 00:09:44.776 sys 0m0.086s 00:09:44.776 10:35:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.776 ************************************ 00:09:44.776 10:35:11 -- common/autotest_common.sh@10 -- # set +x 00:09:44.776 END TEST event_reactor 00:09:44.776 ************************************ 00:09:44.776 10:35:11 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:44.776 10:35:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:44.776 10:35:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.776 10:35:11 -- common/autotest_common.sh@10 -- # set +x 00:09:44.776 ************************************ 00:09:44.776 START TEST event_reactor_perf 00:09:44.776 ************************************ 00:09:44.776 10:35:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:44.776 [2024-07-24 10:35:11.161268] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:09:44.776 [2024-07-24 10:35:11.161514] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116319 ] 00:09:44.776 [2024-07-24 10:35:11.311195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.776 [2024-07-24 10:35:11.432242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.151 test_start 00:09:46.151 test_end 00:09:46.151 Performance: 321164 events per second 00:09:46.151 00:09:46.151 real 0m1.429s 00:09:46.151 user 0m1.240s 00:09:46.151 sys 0m0.088s 00:09:46.151 10:35:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.151 10:35:12 -- common/autotest_common.sh@10 -- # set +x 00:09:46.151 ************************************ 00:09:46.151 END TEST event_reactor_perf 00:09:46.151 ************************************ 00:09:46.151 10:35:12 -- event/event.sh@49 -- # uname -s 00:09:46.151 10:35:12 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:46.151 10:35:12 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:46.151 10:35:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:46.151 10:35:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.151 10:35:12 -- common/autotest_common.sh@10 -- # set +x 00:09:46.151 ************************************ 00:09:46.151 START TEST event_scheduler 00:09:46.151 ************************************ 00:09:46.151 10:35:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:46.151 * Looking for test storage... 00:09:46.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:46.151 10:35:12 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:46.151 10:35:12 -- scheduler/scheduler.sh@35 -- # scheduler_pid=116391 00:09:46.151 10:35:12 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:46.151 10:35:12 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:46.151 10:35:12 -- scheduler/scheduler.sh@37 -- # waitforlisten 116391 00:09:46.151 10:35:12 -- common/autotest_common.sh@819 -- # '[' -z 116391 ']' 00:09:46.151 10:35:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.151 10:35:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:46.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.151 10:35:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.151 10:35:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:46.151 10:35:12 -- common/autotest_common.sh@10 -- # set +x 00:09:46.151 [2024-07-24 10:35:12.773738] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
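The scheduler test app is started with --wait-for-rpc, which holds the framework before subsystem initialization so the scheduler can be chosen over RPC first; -p 0x2 makes core 2 the main core, which is why the EAL arguments below carry --main-lcore=2. The startup handshake, assuming the same binary and the default RPC socket:

SPDK=/home/vagrant/spdk_repo/spdk
# Cores 0-3, main core 2, paused before framework init (-f as used in this run).
$SPDK/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
# Once /var/tmp/spdk.sock answers (see the polling sketch earlier):
$SPDK/scripts/rpc.py framework_set_scheduler dynamic
$SPDK/scripts/rpc.py framework_start_init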
00:09:46.151 [2024-07-24 10:35:12.774011] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116391 ] 00:09:46.409 [2024-07-24 10:35:12.945717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.409 [2024-07-24 10:35:13.067757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.409 [2024-07-24 10:35:13.068025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.409 [2024-07-24 10:35:13.068032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.409 [2024-07-24 10:35:13.068059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.344 10:35:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:47.344 10:35:13 -- common/autotest_common.sh@852 -- # return 0 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 POWER: Env isn't set yet! 00:09:47.344 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:47.344 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:47.344 POWER: Cannot set governor of lcore 0 to userspace 00:09:47.344 POWER: Attempting to initialise PSTAT power management... 00:09:47.344 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:47.344 POWER: Cannot set governor of lcore 0 to performance 00:09:47.344 POWER: Attempting to initialise CPPC power management... 00:09:47.344 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:47.344 POWER: Cannot set governor of lcore 0 to userspace 00:09:47.344 POWER: Attempting to initialise VM power management... 00:09:47.344 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:47.344 POWER: Unable to set Power Management Environment for lcore 0 00:09:47.344 [2024-07-24 10:35:13.710668] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:47.344 [2024-07-24 10:35:13.710757] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:47.344 [2024-07-24 10:35:13.710805] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:47.344 [2024-07-24 10:35:13.710891] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:47.344 [2024-07-24 10:35:13.710941] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:47.344 [2024-07-24 10:35:13.710978] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 [2024-07-24 10:35:13.829808] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
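The POWER notices above mean no usable cpufreq interface is exposed in this VM (the ACPI, PSTAT and CPPC attempts all fail to open the per-cpu scaling_governor file, and the virtio power agent channel is absent), so the dynamic scheduler runs without a DPDK governor; it still applies the load limit 20 / core limit 80 / core busy 95 thresholds logged right after. Whether a governor is available at all can be checked from the same sysfs path the messages name:

# Missing files here reproduce the "Cannot set governor" notices above.
for f in /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor \
         /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors; do
    [ -r "$f" ] && echo "$f: $(cat "$f")" || echo "$f: not present"
done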
00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:47.344 10:35:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:47.344 10:35:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 ************************************ 00:09:47.344 START TEST scheduler_create_thread 00:09:47.344 ************************************ 00:09:47.344 10:35:13 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 2 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 3 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 4 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 5 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 6 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 7 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 8 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 9 00:09:47.344 
10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 10 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:47.344 10:35:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.344 10:35:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:47.344 10:35:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.344 10:35:13 -- common/autotest_common.sh@10 -- # set +x 00:09:48.279 10:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:48.279 10:35:14 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:48.279 10:35:14 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:48.279 10:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:48.279 10:35:14 -- common/autotest_common.sh@10 -- # set +x 00:09:49.656 10:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:49.656 00:09:49.656 real 0m2.146s 00:09:49.656 user 0m0.020s 00:09:49.656 sys 0m0.012s 00:09:49.656 10:35:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.656 10:35:15 -- common/autotest_common.sh@10 -- # set +x 00:09:49.656 ************************************ 00:09:49.656 END TEST scheduler_create_thread 00:09:49.656 ************************************ 00:09:49.656 10:35:16 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:49.656 10:35:16 -- scheduler/scheduler.sh@46 -- # killprocess 116391 00:09:49.656 10:35:16 -- common/autotest_common.sh@926 -- # '[' -z 116391 ']' 00:09:49.656 10:35:16 -- common/autotest_common.sh@930 -- # kill -0 116391 00:09:49.656 10:35:16 -- common/autotest_common.sh@931 -- # uname 00:09:49.656 10:35:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:49.656 10:35:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116391 00:09:49.656 10:35:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:49.656 10:35:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:49.656 killing process with pid 116391 00:09:49.656 10:35:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116391' 00:09:49.656 10:35:16 -- common/autotest_common.sh@945 -- # kill 116391 00:09:49.656 10:35:16 -- common/autotest_common.sh@950 -- # wait 116391 00:09:49.914 [2024-07-24 10:35:16.471414] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
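The scheduler_thread_* calls above are not core SPDK RPCs; they come from the scheduler_plugin module shipped with the test, which is why every call goes through rpc_cmd --plugin scheduler_plugin. A sketch of the same sequence, assuming the plugin's directory (test/event/scheduler) is on PYTHONPATH:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
# Create a thread pinned to core 0 reporting full load, and an idle pinned one
# (-n name, -m cpumask, -a reported activity, matching the calls above).
$RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
$RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0
# The create calls return thread ids; change a thread's reported load, then
# delete one (ids 11 and 12 in this run).
$RPC scheduler_thread_set_active 11 50
$RPC scheduler_thread_delete 12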
00:09:50.172 ************************************ 00:09:50.172 END TEST event_scheduler 00:09:50.172 ************************************ 00:09:50.172 00:09:50.172 real 0m4.111s 00:09:50.172 user 0m7.223s 00:09:50.172 sys 0m0.455s 00:09:50.172 10:35:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:50.172 10:35:16 -- common/autotest_common.sh@10 -- # set +x 00:09:50.172 10:35:16 -- event/event.sh@51 -- # modprobe -n nbd 00:09:50.172 10:35:16 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:50.172 10:35:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:50.172 10:35:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:50.172 10:35:16 -- common/autotest_common.sh@10 -- # set +x 00:09:50.172 ************************************ 00:09:50.172 START TEST app_repeat 00:09:50.172 ************************************ 00:09:50.172 10:35:16 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:50.172 10:35:16 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.172 10:35:16 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.172 10:35:16 -- event/event.sh@13 -- # local nbd_list 00:09:50.172 10:35:16 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.172 10:35:16 -- event/event.sh@14 -- # local bdev_list 00:09:50.172 10:35:16 -- event/event.sh@15 -- # local repeat_times=4 00:09:50.172 10:35:16 -- event/event.sh@17 -- # modprobe nbd 00:09:50.172 10:35:16 -- event/event.sh@19 -- # repeat_pid=116499 00:09:50.172 10:35:16 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:50.173 Process app_repeat pid: 116499 00:09:50.173 10:35:16 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 116499' 00:09:50.173 10:35:16 -- event/event.sh@23 -- # for i in {0..2} 00:09:50.173 spdk_app_start Round 0 00:09:50.173 10:35:16 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:50.173 10:35:16 -- event/event.sh@25 -- # waitforlisten 116499 /var/tmp/spdk-nbd.sock 00:09:50.173 10:35:16 -- common/autotest_common.sh@819 -- # '[' -z 116499 ']' 00:09:50.173 10:35:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.173 10:35:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:50.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:50.173 10:35:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:50.173 10:35:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:50.173 10:35:16 -- common/autotest_common.sh@10 -- # set +x 00:09:50.173 10:35:16 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:50.173 [2024-07-24 10:35:16.821724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
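app_repeat serves RPCs on its own socket (/var/tmp/spdk-nbd.sock) and the wrapper walks it through the rounds of the for i in {0..2} loop, each announced as 'spdk_app_start Round N'; every round sets up two malloc bdevs and exercises them over NBD, as the trace below shows. The per-round bdev setup reduces to:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# Two 64 MB bdevs with a 4 KiB block size; the RPC prints the auto-assigned
# names (Malloc0 and Malloc1 in the trace below).
$RPC bdev_malloc_create 64 4096
$RPC bdev_malloc_create 64 4096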
00:09:50.173 [2024-07-24 10:35:16.821952] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116499 ] 00:09:50.430 [2024-07-24 10:35:16.974458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:50.430 [2024-07-24 10:35:17.097536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.430 [2024-07-24 10:35:17.097546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.364 10:35:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:51.364 10:35:17 -- common/autotest_common.sh@852 -- # return 0 00:09:51.364 10:35:17 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.621 Malloc0 00:09:51.621 10:35:18 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.879 Malloc1 00:09:51.879 10:35:18 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@12 -- # local i 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.879 10:35:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:52.136 /dev/nbd0 00:09:52.136 10:35:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:52.136 10:35:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:52.136 10:35:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:52.136 10:35:18 -- common/autotest_common.sh@857 -- # local i 00:09:52.136 10:35:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:52.136 10:35:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:52.136 10:35:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:52.136 10:35:18 -- common/autotest_common.sh@861 -- # break 00:09:52.136 10:35:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:52.136 10:35:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:52.136 10:35:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.136 1+0 records in 00:09:52.136 1+0 records out 00:09:52.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396456 s, 10.3 MB/s 00:09:52.136 10:35:18 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.136 10:35:18 -- common/autotest_common.sh@874 -- # size=4096 00:09:52.136 10:35:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.136 10:35:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:52.136 10:35:18 -- common/autotest_common.sh@877 -- # return 0 00:09:52.136 10:35:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.136 10:35:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.136 10:35:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:52.393 /dev/nbd1 00:09:52.650 10:35:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.650 10:35:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.650 10:35:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:52.650 10:35:19 -- common/autotest_common.sh@857 -- # local i 00:09:52.650 10:35:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:52.650 10:35:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:52.650 10:35:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:52.650 10:35:19 -- common/autotest_common.sh@861 -- # break 00:09:52.650 10:35:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:52.650 10:35:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:52.650 10:35:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.650 1+0 records in 00:09:52.650 1+0 records out 00:09:52.650 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467719 s, 8.8 MB/s 00:09:52.650 10:35:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.650 10:35:19 -- common/autotest_common.sh@874 -- # size=4096 00:09:52.650 10:35:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.650 10:35:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:52.650 10:35:19 -- common/autotest_common.sh@877 -- # return 0 00:09:52.650 10:35:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.650 10:35:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.650 10:35:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.650 10:35:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.650 10:35:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.908 10:35:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:52.908 { 00:09:52.908 "nbd_device": "/dev/nbd0", 00:09:52.908 "bdev_name": "Malloc0" 00:09:52.908 }, 00:09:52.908 { 00:09:52.908 "nbd_device": "/dev/nbd1", 00:09:52.908 "bdev_name": "Malloc1" 00:09:52.908 } 00:09:52.908 ]' 00:09:52.908 10:35:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:52.908 { 00:09:52.909 "nbd_device": "/dev/nbd0", 00:09:52.909 "bdev_name": "Malloc0" 00:09:52.909 }, 00:09:52.909 { 00:09:52.909 "nbd_device": "/dev/nbd1", 00:09:52.909 "bdev_name": "Malloc1" 00:09:52.909 } 00:09:52.909 ]' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:52.909 /dev/nbd1' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:52.909 /dev/nbd1' 00:09:52.909 10:35:19 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@65 -- # count=2 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@95 -- # count=2 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:52.909 256+0 records in 00:09:52.909 256+0 records out 00:09:52.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00886589 s, 118 MB/s 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:52.909 256+0 records in 00:09:52.909 256+0 records out 00:09:52.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026589 s, 39.4 MB/s 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:52.909 256+0 records in 00:09:52.909 256+0 records out 00:09:52.909 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322212 s, 32.5 MB/s 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@51 -- # local i 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.909 10:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@41 -- # break 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.167 10:35:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@41 -- # break 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.425 10:35:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@65 -- # true 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@65 -- # count=0 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@104 -- # count=0 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:53.683 10:35:20 -- bdev/nbd_common.sh@109 -- # return 0 00:09:53.683 10:35:20 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:54.250 10:35:20 -- event/event.sh@35 -- # sleep 3 00:09:54.250 [2024-07-24 10:35:20.830518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:54.250 [2024-07-24 10:35:20.911104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.250 [2024-07-24 10:35:20.911119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.508 [2024-07-24 10:35:20.965755] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:54.508 [2024-07-24 10:35:20.965895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
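The write/verify cycle traced above reduces to a small pattern: fill a scratch file with random data, copy it onto every exported NBD device with O_DIRECT, then compare each device back against the file and clean up. Below is a minimal bash sketch of that pattern, reconstructed from the trace; the helper name nbd_dd_data_verify and the nbdrandtest path come from the log, while the exact body in nbd_common.sh may differ in detail.

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

    if [ "$operation" = write ]; then
        # 1 MiB of random data, pushed through each NBD device with O_DIRECT
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        # Byte-for-byte comparison of the first 1M of each device against the reference file
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev"
        done
        rm "$tmp_file"
    fi
}

# Usage, as in the trace: write first, then verify the same device list
# nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
# nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify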
00:09:57.038 10:35:23 -- event/event.sh@23 -- # for i in {0..2} 00:09:57.038 spdk_app_start Round 1 00:09:57.038 10:35:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:57.038 10:35:23 -- event/event.sh@25 -- # waitforlisten 116499 /var/tmp/spdk-nbd.sock 00:09:57.038 10:35:23 -- common/autotest_common.sh@819 -- # '[' -z 116499 ']' 00:09:57.038 10:35:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:57.038 10:35:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:57.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:57.038 10:35:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:57.038 10:35:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:57.038 10:35:23 -- common/autotest_common.sh@10 -- # set +x 00:09:57.295 10:35:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:57.295 10:35:23 -- common/autotest_common.sh@852 -- # return 0 00:09:57.295 10:35:23 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.554 Malloc0 00:09:57.554 10:35:24 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.812 Malloc1 00:09:57.812 10:35:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@12 -- # local i 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.812 10:35:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:58.071 /dev/nbd0 00:09:58.071 10:35:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:58.071 10:35:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:58.071 10:35:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:58.071 10:35:24 -- common/autotest_common.sh@857 -- # local i 00:09:58.071 10:35:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:58.071 10:35:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:58.071 10:35:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:58.071 10:35:24 -- common/autotest_common.sh@861 -- # break 00:09:58.071 10:35:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:58.071 10:35:24 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:09:58.071 10:35:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.071 1+0 records in 00:09:58.071 1+0 records out 00:09:58.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028099 s, 14.6 MB/s 00:09:58.071 10:35:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.071 10:35:24 -- common/autotest_common.sh@874 -- # size=4096 00:09:58.071 10:35:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.071 10:35:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:58.071 10:35:24 -- common/autotest_common.sh@877 -- # return 0 00:09:58.071 10:35:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.071 10:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.071 10:35:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:58.329 /dev/nbd1 00:09:58.329 10:35:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:58.329 10:35:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:58.329 10:35:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:58.329 10:35:24 -- common/autotest_common.sh@857 -- # local i 00:09:58.329 10:35:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:58.329 10:35:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:58.329 10:35:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:58.329 10:35:24 -- common/autotest_common.sh@861 -- # break 00:09:58.329 10:35:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:58.329 10:35:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:58.329 10:35:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.329 1+0 records in 00:09:58.329 1+0 records out 00:09:58.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311261 s, 13.2 MB/s 00:09:58.329 10:35:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.329 10:35:24 -- common/autotest_common.sh@874 -- # size=4096 00:09:58.329 10:35:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.329 10:35:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:58.329 10:35:24 -- common/autotest_common.sh@877 -- # return 0 00:09:58.329 10:35:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.329 10:35:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.329 10:35:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:58.329 10:35:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.329 10:35:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:58.897 { 00:09:58.897 "nbd_device": "/dev/nbd0", 00:09:58.897 "bdev_name": "Malloc0" 00:09:58.897 }, 00:09:58.897 { 00:09:58.897 "nbd_device": "/dev/nbd1", 00:09:58.897 "bdev_name": "Malloc1" 00:09:58.897 } 00:09:58.897 ]' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:58.897 { 00:09:58.897 "nbd_device": "/dev/nbd0", 00:09:58.897 "bdev_name": "Malloc0" 00:09:58.897 }, 00:09:58.897 { 00:09:58.897 "nbd_device": "/dev/nbd1", 00:09:58.897 "bdev_name": "Malloc1" 00:09:58.897 } 
00:09:58.897 ]' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:58.897 /dev/nbd1' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:58.897 /dev/nbd1' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@65 -- # count=2 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@95 -- # count=2 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:58.897 256+0 records in 00:09:58.897 256+0 records out 00:09:58.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00673267 s, 156 MB/s 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:58.897 256+0 records in 00:09:58.897 256+0 records out 00:09:58.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029163 s, 36.0 MB/s 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:58.897 256+0 records in 00:09:58.897 256+0 records out 00:09:58.897 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031361 s, 33.4 MB/s 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:09:58.897 10:35:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@51 -- # local i 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.897 10:35:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@41 -- # break 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.155 10:35:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@41 -- # break 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.413 10:35:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.671 10:35:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.671 10:35:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.671 10:35:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@65 -- # true 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@65 -- # count=0 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@104 -- # count=0 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:59.929 10:35:26 -- bdev/nbd_common.sh@109 -- # return 0 00:09:59.929 10:35:26 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:00.188 10:35:26 -- event/event.sh@35 -- # sleep 3 00:10:00.446 [2024-07-24 10:35:26.972455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.446 [2024-07-24 10:35:27.091146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.446 [2024-07-24 10:35:27.091159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.704 [2024-07-24 10:35:27.163058] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
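The count check that follows each stop cycle is a small RPC-plus-jq idiom: list the NBD devices over the Unix socket, extract the device paths, and count how many look like /dev/nbd*. A sketch of that idiom, assuming the rpc.py path and socket names shown in the log; the `|| true` guard is inferred from the bare `true` step in the trace, since grep -c exits non-zero when it finds nothing.

nbd_get_count() {
    local rpc_server=$1
    local disks_json names
    disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    # Pull the nbd_device fields out of the JSON array and count the /dev/nbd entries
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    echo "$names" | grep -c /dev/nbd || true
}

# After nbd_stop_disks the test expects the count to drop back to 0:
# [ "$(nbd_get_count /var/tmp/spdk-nbd.sock)" -eq 0 ]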
00:10:00.704 [2024-07-24 10:35:27.163175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:03.234 spdk_app_start Round 2 00:10:03.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:03.234 10:35:29 -- event/event.sh@23 -- # for i in {0..2} 00:10:03.234 10:35:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:03.234 10:35:29 -- event/event.sh@25 -- # waitforlisten 116499 /var/tmp/spdk-nbd.sock 00:10:03.234 10:35:29 -- common/autotest_common.sh@819 -- # '[' -z 116499 ']' 00:10:03.234 10:35:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:03.234 10:35:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:03.234 10:35:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:03.234 10:35:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:03.234 10:35:29 -- common/autotest_common.sh@10 -- # set +x 00:10:03.533 10:35:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:03.533 10:35:29 -- common/autotest_common.sh@852 -- # return 0 00:10:03.533 10:35:29 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:03.791 Malloc0 00:10:03.791 10:35:30 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:04.049 Malloc1 00:10:04.049 10:35:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.049 10:35:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:04.050 10:35:30 -- bdev/nbd_common.sh@12 -- # local i 00:10:04.050 10:35:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:04.050 10:35:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.050 10:35:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:04.308 /dev/nbd0 00:10:04.308 10:35:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:04.308 10:35:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:04.308 10:35:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:10:04.308 10:35:30 -- common/autotest_common.sh@857 -- # local i 00:10:04.308 10:35:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:04.308 10:35:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:04.308 10:35:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:10:04.308 10:35:30 -- 
common/autotest_common.sh@861 -- # break 00:10:04.308 10:35:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:04.308 10:35:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:04.308 10:35:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:04.308 1+0 records in 00:10:04.308 1+0 records out 00:10:04.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301091 s, 13.6 MB/s 00:10:04.308 10:35:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.308 10:35:30 -- common/autotest_common.sh@874 -- # size=4096 00:10:04.308 10:35:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.308 10:35:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:04.308 10:35:30 -- common/autotest_common.sh@877 -- # return 0 00:10:04.308 10:35:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.308 10:35:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.308 10:35:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:04.566 /dev/nbd1 00:10:04.566 10:35:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:04.566 10:35:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:04.566 10:35:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:10:04.566 10:35:31 -- common/autotest_common.sh@857 -- # local i 00:10:04.566 10:35:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:10:04.566 10:35:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:10:04.566 10:35:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:10:04.566 10:35:31 -- common/autotest_common.sh@861 -- # break 00:10:04.566 10:35:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:10:04.566 10:35:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:10:04.566 10:35:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:04.566 1+0 records in 00:10:04.567 1+0 records out 00:10:04.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268157 s, 15.3 MB/s 00:10:04.567 10:35:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.567 10:35:31 -- common/autotest_common.sh@874 -- # size=4096 00:10:04.567 10:35:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:04.567 10:35:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:10:04.567 10:35:31 -- common/autotest_common.sh@877 -- # return 0 00:10:04.567 10:35:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:04.567 10:35:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:04.567 10:35:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:04.567 10:35:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:04.567 10:35:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:04.825 { 00:10:04.825 "nbd_device": "/dev/nbd0", 00:10:04.825 "bdev_name": "Malloc0" 00:10:04.825 }, 00:10:04.825 { 00:10:04.825 "nbd_device": "/dev/nbd1", 00:10:04.825 "bdev_name": "Malloc1" 00:10:04.825 } 00:10:04.825 ]' 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:04.825 10:35:31 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:10:04.825 { 00:10:04.825 "nbd_device": "/dev/nbd0", 00:10:04.825 "bdev_name": "Malloc0" 00:10:04.825 }, 00:10:04.825 { 00:10:04.825 "nbd_device": "/dev/nbd1", 00:10:04.825 "bdev_name": "Malloc1" 00:10:04.825 } 00:10:04.825 ]' 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:04.825 /dev/nbd1' 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:04.825 /dev/nbd1' 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@65 -- # count=2 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@66 -- # echo 2 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@95 -- # count=2 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:04.825 256+0 records in 00:10:04.825 256+0 records out 00:10:04.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00528943 s, 198 MB/s 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:04.825 10:35:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:05.083 256+0 records in 00:10:05.083 256+0 records out 00:10:05.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02631 s, 39.9 MB/s 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:05.083 256+0 records in 00:10:05.083 256+0 records out 00:10:05.083 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299079 s, 35.1 MB/s 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:05.083 
10:35:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@51 -- # local i 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.083 10:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@41 -- # break 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:05.342 10:35:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@41 -- # break 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@45 -- # return 0 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:05.600 10:35:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@65 -- # true 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@65 -- # count=0 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@104 -- # count=0 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:05.858 10:35:32 -- bdev/nbd_common.sh@109 -- # return 0 00:10:05.858 10:35:32 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:06.116 10:35:32 -- event/event.sh@35 -- # sleep 3 00:10:06.375 [2024-07-24 10:35:32.928105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.375 [2024-07-24 10:35:33.015940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.375 [2024-07-24 10:35:33.015949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.633 [2024-07-24 10:35:33.074002] notify.c: 45:spdk_notify_type_register: *NOTICE*: 
Notification type 'bdev_register' already registered. 00:10:06.633 [2024-07-24 10:35:33.074110] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:09.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:09.166 10:35:35 -- event/event.sh@38 -- # waitforlisten 116499 /var/tmp/spdk-nbd.sock 00:10:09.166 10:35:35 -- common/autotest_common.sh@819 -- # '[' -z 116499 ']' 00:10:09.166 10:35:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:09.166 10:35:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:09.166 10:35:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:09.166 10:35:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:09.166 10:35:35 -- common/autotest_common.sh@10 -- # set +x 00:10:09.424 10:35:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:09.424 10:35:36 -- common/autotest_common.sh@852 -- # return 0 00:10:09.424 10:35:36 -- event/event.sh@39 -- # killprocess 116499 00:10:09.424 10:35:36 -- common/autotest_common.sh@926 -- # '[' -z 116499 ']' 00:10:09.424 10:35:36 -- common/autotest_common.sh@930 -- # kill -0 116499 00:10:09.424 10:35:36 -- common/autotest_common.sh@931 -- # uname 00:10:09.424 10:35:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:09.424 10:35:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116499 00:10:09.424 10:35:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:09.424 10:35:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:09.424 10:35:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116499' 00:10:09.424 killing process with pid 116499 00:10:09.424 10:35:36 -- common/autotest_common.sh@945 -- # kill 116499 00:10:09.424 10:35:36 -- common/autotest_common.sh@950 -- # wait 116499 00:10:09.683 spdk_app_start is called in Round 0. 00:10:09.683 Shutdown signal received, stop current app iteration 00:10:09.683 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:10:09.683 spdk_app_start is called in Round 1. 00:10:09.683 Shutdown signal received, stop current app iteration 00:10:09.683 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:10:09.683 spdk_app_start is called in Round 2. 00:10:09.683 Shutdown signal received, stop current app iteration 00:10:09.683 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:10:09.683 spdk_app_start is called in Round 3. 
00:10:09.683 Shutdown signal received, stop current app iteration 00:10:09.683 10:35:36 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:09.683 10:35:36 -- event/event.sh@42 -- # return 0 00:10:09.683 00:10:09.683 real 0m19.560s 00:10:09.683 user 0m44.029s 00:10:09.683 sys 0m2.918s 00:10:09.683 10:35:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.683 10:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.683 ************************************ 00:10:09.683 END TEST app_repeat 00:10:09.683 ************************************ 00:10:09.941 10:35:36 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:09.941 10:35:36 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:09.941 10:35:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:09.941 10:35:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.941 10:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.941 ************************************ 00:10:09.941 START TEST cpu_locks 00:10:09.941 ************************************ 00:10:09.941 10:35:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:09.941 * Looking for test storage... 00:10:09.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:09.941 10:35:36 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:09.941 10:35:36 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:09.942 10:35:36 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:09.942 10:35:36 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:09.942 10:35:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:09.942 10:35:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:09.942 10:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.942 ************************************ 00:10:09.942 START TEST default_locks 00:10:09.942 ************************************ 00:10:09.942 10:35:36 -- common/autotest_common.sh@1104 -- # default_locks 00:10:09.942 10:35:36 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=117017 00:10:09.942 10:35:36 -- event/cpu_locks.sh@47 -- # waitforlisten 117017 00:10:09.942 10:35:36 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:09.942 10:35:36 -- common/autotest_common.sh@819 -- # '[' -z 117017 ']' 00:10:09.942 10:35:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.942 10:35:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:09.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.942 10:35:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.942 10:35:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:09.942 10:35:36 -- common/autotest_common.sh@10 -- # set +x 00:10:09.942 [2024-07-24 10:35:36.565098] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:10:09.942 [2024-07-24 10:35:36.565356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117017 ] 00:10:10.199 [2024-07-24 10:35:36.706820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.199 [2024-07-24 10:35:36.828831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:10.199 [2024-07-24 10:35:36.829133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.133 10:35:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:11.133 10:35:37 -- common/autotest_common.sh@852 -- # return 0 00:10:11.133 10:35:37 -- event/cpu_locks.sh@49 -- # locks_exist 117017 00:10:11.133 10:35:37 -- event/cpu_locks.sh@22 -- # lslocks -p 117017 00:10:11.133 10:35:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:11.133 10:35:37 -- event/cpu_locks.sh@50 -- # killprocess 117017 00:10:11.133 10:35:37 -- common/autotest_common.sh@926 -- # '[' -z 117017 ']' 00:10:11.133 10:35:37 -- common/autotest_common.sh@930 -- # kill -0 117017 00:10:11.133 10:35:37 -- common/autotest_common.sh@931 -- # uname 00:10:11.133 10:35:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:11.133 10:35:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117017 00:10:11.133 10:35:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:11.133 10:35:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:11.133 killing process with pid 117017 00:10:11.133 10:35:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117017' 00:10:11.133 10:35:37 -- common/autotest_common.sh@945 -- # kill 117017 00:10:11.133 10:35:37 -- common/autotest_common.sh@950 -- # wait 117017 00:10:11.701 10:35:38 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 117017 00:10:11.701 10:35:38 -- common/autotest_common.sh@640 -- # local es=0 00:10:11.701 10:35:38 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 117017 00:10:11.701 10:35:38 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:11.701 10:35:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.701 10:35:38 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:11.701 10:35:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:11.701 10:35:38 -- common/autotest_common.sh@643 -- # waitforlisten 117017 00:10:11.701 10:35:38 -- common/autotest_common.sh@819 -- # '[' -z 117017 ']' 00:10:11.701 10:35:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.701 10:35:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.701 10:35:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
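The lslocks check traced here is the heart of default_locks: while spdk_tgt -m 0x1 is running, the process must hold a file lock whose name contains spdk_cpu_lock for the core it owns. A sketch of that check, using the helper name and pid from the trace; lock-file naming details beyond the grep pattern are not shown in this log.

locks_exist() {
    local pid=$1
    # The target takes one lock file per owned core; the test only cares that
    # at least one spdk_cpu_lock entry is attributed to this pid.
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# locks_exist 117017   # true while the target owns core 0; killing the target releases it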
00:10:11.701 10:35:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.701 10:35:38 -- common/autotest_common.sh@10 -- # set +x 00:10:11.701 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (117017) - No such process 00:10:11.701 ERROR: process (pid: 117017) is no longer running 00:10:11.701 10:35:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:11.701 10:35:38 -- common/autotest_common.sh@852 -- # return 1 00:10:11.701 10:35:38 -- common/autotest_common.sh@643 -- # es=1 00:10:11.701 10:35:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:11.701 10:35:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:11.701 10:35:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:11.701 10:35:38 -- event/cpu_locks.sh@54 -- # no_locks 00:10:11.701 10:35:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:11.701 10:35:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:11.701 10:35:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:11.701 00:10:11.701 real 0m1.880s 00:10:11.701 user 0m1.916s 00:10:11.701 sys 0m0.612s 00:10:11.701 10:35:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.701 10:35:38 -- common/autotest_common.sh@10 -- # set +x 00:10:11.701 ************************************ 00:10:11.701 END TEST default_locks 00:10:11.701 ************************************ 00:10:11.960 10:35:38 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:11.960 10:35:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:11.960 10:35:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.960 10:35:38 -- common/autotest_common.sh@10 -- # set +x 00:10:11.960 ************************************ 00:10:11.960 START TEST default_locks_via_rpc 00:10:11.960 ************************************ 00:10:11.960 10:35:38 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:10:11.960 10:35:38 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=117073 00:10:11.960 10:35:38 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:11.960 10:35:38 -- event/cpu_locks.sh@63 -- # waitforlisten 117073 00:10:11.960 10:35:38 -- common/autotest_common.sh@819 -- # '[' -z 117073 ']' 00:10:11.960 10:35:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.960 10:35:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.960 10:35:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.960 10:35:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.960 10:35:38 -- common/autotest_common.sh@10 -- # set +x 00:10:11.960 [2024-07-24 10:35:38.503454] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
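After the target has been killed, default_locks re-runs waitforlisten on the dead pid and expects it to fail, which is what the NOT wrapper expresses. A simplified sketch of the inversion logic visible in the es= bookkeeping above; the real helper also validates its argument with valid_exec_arg, screens out signal deaths via the `(( es > 128 ))` step, and supports an expected-status check, none of which are reproduced here.

NOT() {
    local es=0
    "$@" || es=$?
    # The traced helper applies extra filtering to es before this point;
    # the essential behaviour is the inversion: succeed only if the command failed.
    (( es != 0 ))
}

# NOT waitforlisten 117017   # passes here, because pid 117017 no longer exists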
00:10:11.960 [2024-07-24 10:35:38.503755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117073 ] 00:10:12.219 [2024-07-24 10:35:38.661287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.219 [2024-07-24 10:35:38.784496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:12.219 [2024-07-24 10:35:38.784782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.164 10:35:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:13.164 10:35:39 -- common/autotest_common.sh@852 -- # return 0 00:10:13.164 10:35:39 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:13.164 10:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:13.164 10:35:39 -- common/autotest_common.sh@10 -- # set +x 00:10:13.164 10:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:13.164 10:35:39 -- event/cpu_locks.sh@67 -- # no_locks 00:10:13.164 10:35:39 -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:13.164 10:35:39 -- event/cpu_locks.sh@26 -- # local lock_files 00:10:13.164 10:35:39 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:13.164 10:35:39 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:13.164 10:35:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:13.164 10:35:39 -- common/autotest_common.sh@10 -- # set +x 00:10:13.164 10:35:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:13.164 10:35:39 -- event/cpu_locks.sh@71 -- # locks_exist 117073 00:10:13.164 10:35:39 -- event/cpu_locks.sh@22 -- # lslocks -p 117073 00:10:13.164 10:35:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:13.164 10:35:39 -- event/cpu_locks.sh@73 -- # killprocess 117073 00:10:13.164 10:35:39 -- common/autotest_common.sh@926 -- # '[' -z 117073 ']' 00:10:13.164 10:35:39 -- common/autotest_common.sh@930 -- # kill -0 117073 00:10:13.164 10:35:39 -- common/autotest_common.sh@931 -- # uname 00:10:13.164 10:35:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:13.164 10:35:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117073 00:10:13.164 10:35:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:13.164 10:35:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:13.164 killing process with pid 117073 00:10:13.164 10:35:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117073' 00:10:13.164 10:35:39 -- common/autotest_common.sh@945 -- # kill 117073 00:10:13.164 10:35:39 -- common/autotest_common.sh@950 -- # wait 117073 00:10:13.730 00:10:13.730 real 0m1.961s 00:10:13.730 user 0m1.994s 00:10:13.730 sys 0m0.647s 00:10:13.730 10:35:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.730 10:35:40 -- common/autotest_common.sh@10 -- # set +x 00:10:13.730 ************************************ 00:10:13.730 END TEST default_locks_via_rpc 00:10:13.730 ************************************ 00:10:13.988 10:35:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:13.988 10:35:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:13.988 10:35:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:13.988 10:35:40 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 
************************************ 00:10:13.988 START TEST non_locking_app_on_locked_coremask 00:10:13.988 ************************************ 00:10:13.988 10:35:40 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:10:13.988 10:35:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=117133 00:10:13.988 10:35:40 -- event/cpu_locks.sh@81 -- # waitforlisten 117133 /var/tmp/spdk.sock 00:10:13.988 10:35:40 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:13.988 10:35:40 -- common/autotest_common.sh@819 -- # '[' -z 117133 ']' 00:10:13.988 10:35:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.988 10:35:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:13.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.988 10:35:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.988 10:35:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:13.988 10:35:40 -- common/autotest_common.sh@10 -- # set +x 00:10:13.988 [2024-07-24 10:35:40.508782] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:13.988 [2024-07-24 10:35:40.509094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117133 ] 00:10:13.988 [2024-07-24 10:35:40.658112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.247 [2024-07-24 10:35:40.780530] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:14.247 [2024-07-24 10:35:40.780813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.814 10:35:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:14.814 10:35:41 -- common/autotest_common.sh@852 -- # return 0 00:10:14.814 10:35:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=117154 00:10:14.814 10:35:41 -- event/cpu_locks.sh@85 -- # waitforlisten 117154 /var/tmp/spdk2.sock 00:10:14.814 10:35:41 -- common/autotest_common.sh@819 -- # '[' -z 117154 ']' 00:10:14.814 10:35:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:14.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:14.814 10:35:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:14.814 10:35:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:14.814 10:35:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:14.814 10:35:41 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:14.814 10:35:41 -- common/autotest_common.sh@10 -- # set +x 00:10:15.073 [2024-07-24 10:35:41.530141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:15.073 [2024-07-24 10:35:41.530376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117154 ] 00:10:15.073 [2024-07-24 10:35:41.673663] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
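non_locking_app_on_locked_coremask exercises exactly what the two spdk_tgt invocations above show: the first target takes the core-0 lock, and a second target on the same mask is still allowed to start because it passes --disable-cpumask-locks and listens on its own RPC socket. The outline below is condensed from the log; the pid variable names are illustrative (the script itself uses spdk_tgt_pid and spdk_tgt_pid2), and the lock check on the first instance follows just below in the trace.

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# First instance: owns core 0 and its spdk_cpu_lock file
$SPDK_TGT -m 0x1 &
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

# Second instance: same core mask, but core locks disabled and a separate socket,
# so it starts cleanly ("CPU core locks deactivated" in the log above)
$SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock

# The first instance must still be the one holding the core lock
locks_exist "$pid1"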
00:10:15.073 [2024-07-24 10:35:41.673786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.331 [2024-07-24 10:35:41.866100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:15.331 [2024-07-24 10:35:41.866494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.898 10:35:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:15.898 10:35:42 -- common/autotest_common.sh@852 -- # return 0 00:10:15.898 10:35:42 -- event/cpu_locks.sh@87 -- # locks_exist 117133 00:10:15.898 10:35:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:15.898 10:35:42 -- event/cpu_locks.sh@22 -- # lslocks -p 117133 00:10:16.465 10:35:42 -- event/cpu_locks.sh@89 -- # killprocess 117133 00:10:16.465 10:35:42 -- common/autotest_common.sh@926 -- # '[' -z 117133 ']' 00:10:16.465 10:35:42 -- common/autotest_common.sh@930 -- # kill -0 117133 00:10:16.465 10:35:42 -- common/autotest_common.sh@931 -- # uname 00:10:16.465 10:35:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:16.465 10:35:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117133 00:10:16.465 10:35:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:16.465 10:35:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:16.465 killing process with pid 117133 00:10:16.465 10:35:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117133' 00:10:16.465 10:35:42 -- common/autotest_common.sh@945 -- # kill 117133 00:10:16.465 10:35:42 -- common/autotest_common.sh@950 -- # wait 117133 00:10:17.400 10:35:43 -- event/cpu_locks.sh@90 -- # killprocess 117154 00:10:17.400 10:35:43 -- common/autotest_common.sh@926 -- # '[' -z 117154 ']' 00:10:17.400 10:35:43 -- common/autotest_common.sh@930 -- # kill -0 117154 00:10:17.400 10:35:43 -- common/autotest_common.sh@931 -- # uname 00:10:17.400 10:35:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:17.400 10:35:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117154 00:10:17.400 10:35:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:17.400 killing process with pid 117154 00:10:17.400 10:35:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:17.400 10:35:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117154' 00:10:17.400 10:35:43 -- common/autotest_common.sh@945 -- # kill 117154 00:10:17.400 10:35:43 -- common/autotest_common.sh@950 -- # wait 117154 00:10:17.659 00:10:17.659 real 0m3.839s 00:10:17.659 user 0m4.136s 00:10:17.659 sys 0m1.178s 00:10:17.659 10:35:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.659 10:35:44 -- common/autotest_common.sh@10 -- # set +x 00:10:17.659 ************************************ 00:10:17.659 END TEST non_locking_app_on_locked_coremask 00:10:17.659 ************************************ 00:10:17.659 10:35:44 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:17.659 10:35:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:17.659 10:35:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:17.659 10:35:44 -- common/autotest_common.sh@10 -- # set +x 00:10:17.659 ************************************ 00:10:17.659 START TEST locking_app_on_unlocked_coremask 00:10:17.659 ************************************ 00:10:17.659 10:35:44 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:10:17.660 
10:35:44 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=117228 00:10:17.660 10:35:44 -- event/cpu_locks.sh@99 -- # waitforlisten 117228 /var/tmp/spdk.sock 00:10:17.660 10:35:44 -- common/autotest_common.sh@819 -- # '[' -z 117228 ']' 00:10:17.660 10:35:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.660 10:35:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:17.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.660 10:35:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.660 10:35:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:17.660 10:35:44 -- common/autotest_common.sh@10 -- # set +x 00:10:17.660 10:35:44 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:17.918 [2024-07-24 10:35:44.398987] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:17.918 [2024-07-24 10:35:44.399476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117228 ] 00:10:17.918 [2024-07-24 10:35:44.543925] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:17.918 [2024-07-24 10:35:44.544024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.177 [2024-07-24 10:35:44.647048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:18.177 [2024-07-24 10:35:44.647376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.744 10:35:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:18.744 10:35:45 -- common/autotest_common.sh@852 -- # return 0 00:10:18.744 10:35:45 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=117251 00:10:18.744 10:35:45 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:18.744 10:35:45 -- event/cpu_locks.sh@103 -- # waitforlisten 117251 /var/tmp/spdk2.sock 00:10:18.744 10:35:45 -- common/autotest_common.sh@819 -- # '[' -z 117251 ']' 00:10:18.744 10:35:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:18.744 10:35:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:18.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:18.744 10:35:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:18.744 10:35:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:18.744 10:35:45 -- common/autotest_common.sh@10 -- # set +x 00:10:18.744 [2024-07-24 10:35:45.415944] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
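Every one of these tests tears down through the killprocess helper whose trace keeps reappearing above: confirm the pid is set and alive, confirm it still names an SPDK reactor, then SIGTERM it and wait. A sketch following the ps/kill/wait sequence in the log; the branch the real helper takes when the process turns out to be a sudo wrapper is omitted, since this run never hits it.

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid"          # fails fast if the process is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in this run
    fi
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    fi
    # (the sudo case is handled separately in the real helper)
}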
00:10:18.745 [2024-07-24 10:35:45.416187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117251 ] 00:10:19.010 [2024-07-24 10:35:45.571152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.403 [2024-07-24 10:35:45.769386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:19.403 [2024-07-24 10:35:45.769631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.973 10:35:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:19.973 10:35:46 -- common/autotest_common.sh@852 -- # return 0 00:10:19.973 10:35:46 -- event/cpu_locks.sh@105 -- # locks_exist 117251 00:10:19.973 10:35:46 -- event/cpu_locks.sh@22 -- # lslocks -p 117251 00:10:19.973 10:35:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:20.539 10:35:46 -- event/cpu_locks.sh@107 -- # killprocess 117228 00:10:20.539 10:35:46 -- common/autotest_common.sh@926 -- # '[' -z 117228 ']' 00:10:20.539 10:35:46 -- common/autotest_common.sh@930 -- # kill -0 117228 00:10:20.539 10:35:46 -- common/autotest_common.sh@931 -- # uname 00:10:20.539 10:35:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:20.539 10:35:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117228 00:10:20.539 10:35:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:20.539 10:35:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:20.539 killing process with pid 117228 00:10:20.539 10:35:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117228' 00:10:20.539 10:35:46 -- common/autotest_common.sh@945 -- # kill 117228 00:10:20.539 10:35:46 -- common/autotest_common.sh@950 -- # wait 117228 00:10:21.475 10:35:47 -- event/cpu_locks.sh@108 -- # killprocess 117251 00:10:21.475 10:35:47 -- common/autotest_common.sh@926 -- # '[' -z 117251 ']' 00:10:21.475 10:35:47 -- common/autotest_common.sh@930 -- # kill -0 117251 00:10:21.475 10:35:47 -- common/autotest_common.sh@931 -- # uname 00:10:21.475 10:35:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:21.475 10:35:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117251 00:10:21.475 10:35:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:21.475 10:35:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:21.475 killing process with pid 117251 00:10:21.475 10:35:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117251' 00:10:21.475 10:35:47 -- common/autotest_common.sh@945 -- # kill 117251 00:10:21.475 10:35:47 -- common/autotest_common.sh@950 -- # wait 117251 00:10:22.042 00:10:22.042 real 0m4.225s 00:10:22.042 user 0m4.560s 00:10:22.042 sys 0m1.210s 00:10:22.042 10:35:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.042 10:35:48 -- common/autotest_common.sh@10 -- # set +x 00:10:22.042 ************************************ 00:10:22.042 END TEST locking_app_on_unlocked_coremask 00:10:22.042 ************************************ 00:10:22.042 10:35:48 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:22.042 10:35:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:22.042 10:35:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.042 10:35:48 -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.042 ************************************ 00:10:22.042 START TEST locking_app_on_locked_coremask 00:10:22.042 ************************************ 00:10:22.042 10:35:48 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:22.042 10:35:48 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117325 00:10:22.042 10:35:48 -- event/cpu_locks.sh@116 -- # waitforlisten 117325 /var/tmp/spdk.sock 00:10:22.042 10:35:48 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:22.042 10:35:48 -- common/autotest_common.sh@819 -- # '[' -z 117325 ']' 00:10:22.042 10:35:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.042 10:35:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:22.042 10:35:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.042 10:35:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:22.042 10:35:48 -- common/autotest_common.sh@10 -- # set +x 00:10:22.042 [2024-07-24 10:35:48.674604] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:22.042 [2024-07-24 10:35:48.674867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117325 ] 00:10:22.301 [2024-07-24 10:35:48.823679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.301 [2024-07-24 10:35:48.920831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:22.301 [2024-07-24 10:35:48.921126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.235 10:35:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:23.235 10:35:49 -- common/autotest_common.sh@852 -- # return 0 00:10:23.235 10:35:49 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117346 00:10:23.235 10:35:49 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117346 /var/tmp/spdk2.sock 00:10:23.235 10:35:49 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:23.235 10:35:49 -- common/autotest_common.sh@640 -- # local es=0 00:10:23.235 10:35:49 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 117346 /var/tmp/spdk2.sock 00:10:23.235 10:35:49 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:23.235 10:35:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:23.235 10:35:49 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:23.235 10:35:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:23.235 10:35:49 -- common/autotest_common.sh@643 -- # waitforlisten 117346 /var/tmp/spdk2.sock 00:10:23.235 10:35:49 -- common/autotest_common.sh@819 -- # '[' -z 117346 ']' 00:10:23.235 10:35:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:23.235 10:35:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:23.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:23.235 10:35:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:23.235 10:35:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:23.235 10:35:49 -- common/autotest_common.sh@10 -- # set +x 00:10:23.235 [2024-07-24 10:35:49.698474] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:23.235 [2024-07-24 10:35:49.698721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117346 ] 00:10:23.235 [2024-07-24 10:35:49.842121] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117325 has claimed it. 00:10:23.235 [2024-07-24 10:35:49.842226] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:24.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (117346) - No such process 00:10:24.170 ERROR: process (pid: 117346) is no longer running 00:10:24.170 10:35:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:24.170 10:35:50 -- common/autotest_common.sh@852 -- # return 1 00:10:24.170 10:35:50 -- common/autotest_common.sh@643 -- # es=1 00:10:24.170 10:35:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:24.170 10:35:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:24.170 10:35:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:24.170 10:35:50 -- event/cpu_locks.sh@122 -- # locks_exist 117325 00:10:24.170 10:35:50 -- event/cpu_locks.sh@22 -- # lslocks -p 117325 00:10:24.170 10:35:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:24.170 10:35:50 -- event/cpu_locks.sh@124 -- # killprocess 117325 00:10:24.170 10:35:50 -- common/autotest_common.sh@926 -- # '[' -z 117325 ']' 00:10:24.170 10:35:50 -- common/autotest_common.sh@930 -- # kill -0 117325 00:10:24.170 10:35:50 -- common/autotest_common.sh@931 -- # uname 00:10:24.170 10:35:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:24.170 10:35:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117325 00:10:24.170 10:35:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:24.170 10:35:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:24.170 killing process with pid 117325 00:10:24.170 10:35:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117325' 00:10:24.170 10:35:50 -- common/autotest_common.sh@945 -- # kill 117325 00:10:24.170 10:35:50 -- common/autotest_common.sh@950 -- # wait 117325 00:10:24.740 00:10:24.740 real 0m2.616s 00:10:24.741 user 0m3.055s 00:10:24.741 sys 0m0.681s 00:10:24.741 10:35:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.741 10:35:51 -- common/autotest_common.sh@10 -- # set +x 00:10:24.741 ************************************ 00:10:24.741 END TEST locking_app_on_locked_coremask 00:10:24.741 ************************************ 00:10:24.741 10:35:51 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:24.741 10:35:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:24.741 10:35:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.741 10:35:51 -- common/autotest_common.sh@10 -- # set +x 00:10:24.741 ************************************ 00:10:24.741 START TEST 
locking_overlapped_coremask 00:10:24.741 ************************************ 00:10:24.741 10:35:51 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:24.741 10:35:51 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117398 00:10:24.741 10:35:51 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:24.741 10:35:51 -- event/cpu_locks.sh@133 -- # waitforlisten 117398 /var/tmp/spdk.sock 00:10:24.741 10:35:51 -- common/autotest_common.sh@819 -- # '[' -z 117398 ']' 00:10:24.741 10:35:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.741 10:35:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:24.741 10:35:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.741 10:35:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:24.741 10:35:51 -- common/autotest_common.sh@10 -- # set +x 00:10:24.741 [2024-07-24 10:35:51.335587] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:24.741 [2024-07-24 10:35:51.335826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117398 ] 00:10:24.999 [2024-07-24 10:35:51.493227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.999 [2024-07-24 10:35:51.590168] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:24.999 [2024-07-24 10:35:51.590593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.999 [2024-07-24 10:35:51.590735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.999 [2024-07-24 10:35:51.590732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.565 10:35:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:25.565 10:35:52 -- common/autotest_common.sh@852 -- # return 0 00:10:25.565 10:35:52 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117421 00:10:25.565 10:35:52 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117421 /var/tmp/spdk2.sock 00:10:25.565 10:35:52 -- common/autotest_common.sh@640 -- # local es=0 00:10:25.565 10:35:52 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 117421 /var/tmp/spdk2.sock 00:10:25.565 10:35:52 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:25.565 10:35:52 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:25.565 10:35:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:25.566 10:35:52 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:25.566 10:35:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:25.566 10:35:52 -- common/autotest_common.sh@643 -- # waitforlisten 117421 /var/tmp/spdk2.sock 00:10:25.566 10:35:52 -- common/autotest_common.sh@819 -- # '[' -z 117421 ']' 00:10:25.566 10:35:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:25.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:25.566 10:35:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:25.566 10:35:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:25.566 10:35:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:25.566 10:35:52 -- common/autotest_common.sh@10 -- # set +x 00:10:25.824 [2024-07-24 10:35:52.292529] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:25.824 [2024-07-24 10:35:52.292744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117421 ] 00:10:25.824 [2024-07-24 10:35:52.453037] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117398 has claimed it. 00:10:25.824 [2024-07-24 10:35:52.453172] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:26.391 ERROR: process (pid: 117421) is no longer running 00:10:26.391 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (117421) - No such process 00:10:26.391 10:35:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:26.391 10:35:52 -- common/autotest_common.sh@852 -- # return 1 00:10:26.391 10:35:52 -- common/autotest_common.sh@643 -- # es=1 00:10:26.391 10:35:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:26.391 10:35:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:26.391 10:35:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:26.391 10:35:52 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:26.391 10:35:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:26.391 10:35:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:26.391 10:35:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:26.391 10:35:52 -- event/cpu_locks.sh@141 -- # killprocess 117398 00:10:26.391 10:35:52 -- common/autotest_common.sh@926 -- # '[' -z 117398 ']' 00:10:26.391 10:35:52 -- common/autotest_common.sh@930 -- # kill -0 117398 00:10:26.391 10:35:52 -- common/autotest_common.sh@931 -- # uname 00:10:26.391 10:35:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:26.391 10:35:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117398 00:10:26.391 10:35:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:26.391 killing process with pid 117398 00:10:26.391 10:35:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:26.391 10:35:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117398' 00:10:26.391 10:35:52 -- common/autotest_common.sh@945 -- # kill 117398 00:10:26.391 10:35:52 -- common/autotest_common.sh@950 -- # wait 117398 00:10:26.957 00:10:26.957 real 0m2.153s 00:10:26.957 user 0m5.745s 00:10:26.957 sys 0m0.457s 00:10:26.957 ************************************ 00:10:26.957 END TEST locking_overlapped_coremask 00:10:26.957 ************************************ 00:10:26.957 10:35:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.957 10:35:53 -- common/autotest_common.sh@10 -- # set +x 
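The check_remaining_locks helper whose expansion appears in the trace above compares the lock files actually present under /var/tmp with the set expected for the 0x7 coremask, i.e. cores 0 through 2. A rough standalone equivalent, with the glob, brace expansion and comparison lifted directly from the traced lines:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "only the expected per-core lock files remain"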
00:10:26.957 10:35:53 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:26.957 10:35:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:26.957 10:35:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:26.957 10:35:53 -- common/autotest_common.sh@10 -- # set +x 00:10:26.957 ************************************ 00:10:26.957 START TEST locking_overlapped_coremask_via_rpc 00:10:26.957 ************************************ 00:10:26.957 10:35:53 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:26.957 10:35:53 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=117466 00:10:26.957 10:35:53 -- event/cpu_locks.sh@149 -- # waitforlisten 117466 /var/tmp/spdk.sock 00:10:26.957 10:35:53 -- common/autotest_common.sh@819 -- # '[' -z 117466 ']' 00:10:26.957 10:35:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.957 10:35:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:26.957 10:35:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.957 10:35:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:26.957 10:35:53 -- common/autotest_common.sh@10 -- # set +x 00:10:26.957 10:35:53 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:26.957 [2024-07-24 10:35:53.549122] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:26.957 [2024-07-24 10:35:53.549938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117466 ] 00:10:27.215 [2024-07-24 10:35:53.716134] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:27.215 [2024-07-24 10:35:53.716252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.215 [2024-07-24 10:35:53.812448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:27.215 [2024-07-24 10:35:53.812888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.215 [2024-07-24 10:35:53.813023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.215 [2024-07-24 10:35:53.813031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.151 10:35:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:28.151 10:35:54 -- common/autotest_common.sh@852 -- # return 0 00:10:28.151 10:35:54 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=117489 00:10:28.151 10:35:54 -- event/cpu_locks.sh@153 -- # waitforlisten 117489 /var/tmp/spdk2.sock 00:10:28.151 10:35:54 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:28.151 10:35:54 -- common/autotest_common.sh@819 -- # '[' -z 117489 ']' 00:10:28.151 10:35:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:28.151 10:35:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:28.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:28.151 10:35:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:28.151 10:35:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:28.151 10:35:54 -- common/autotest_common.sh@10 -- # set +x 00:10:28.151 [2024-07-24 10:35:54.568877] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:28.151 [2024-07-24 10:35:54.569067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117489 ] 00:10:28.151 [2024-07-24 10:35:54.753604] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:28.151 [2024-07-24 10:35:54.753732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:28.409 [2024-07-24 10:35:54.936393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:28.409 [2024-07-24 10:35:54.936759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.409 [2024-07-24 10:35:54.936873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:28.409 [2024-07-24 10:35:54.936882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.976 10:35:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:28.976 10:35:55 -- common/autotest_common.sh@852 -- # return 0 00:10:28.976 10:35:55 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:28.976 10:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:28.976 10:35:55 -- common/autotest_common.sh@10 -- # set +x 00:10:28.976 10:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:28.976 10:35:55 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:28.976 10:35:55 -- common/autotest_common.sh@640 -- # local es=0 00:10:28.976 10:35:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:28.976 10:35:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:28.976 10:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:28.976 10:35:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:28.976 10:35:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:28.976 10:35:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:28.976 10:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:28.976 10:35:55 -- common/autotest_common.sh@10 -- # set +x 00:10:28.976 [2024-07-24 10:35:55.577738] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117466 has claimed it. 
00:10:28.976 request: 00:10:28.976 { 00:10:28.976 "method": "framework_enable_cpumask_locks", 00:10:28.976 "req_id": 1 00:10:28.976 } 00:10:28.976 Got JSON-RPC error response 00:10:28.976 response: 00:10:28.976 { 00:10:28.976 "code": -32603, 00:10:28.976 "message": "Failed to claim CPU core: 2" 00:10:28.976 } 00:10:28.976 10:35:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:28.976 10:35:55 -- common/autotest_common.sh@643 -- # es=1 00:10:28.976 10:35:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:28.976 10:35:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:28.976 10:35:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:28.976 10:35:55 -- event/cpu_locks.sh@158 -- # waitforlisten 117466 /var/tmp/spdk.sock 00:10:28.976 10:35:55 -- common/autotest_common.sh@819 -- # '[' -z 117466 ']' 00:10:28.976 10:35:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.976 10:35:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:28.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.976 10:35:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.976 10:35:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:28.976 10:35:55 -- common/autotest_common.sh@10 -- # set +x 00:10:29.234 10:35:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:29.234 10:35:55 -- common/autotest_common.sh@852 -- # return 0 00:10:29.234 10:35:55 -- event/cpu_locks.sh@159 -- # waitforlisten 117489 /var/tmp/spdk2.sock 00:10:29.234 10:35:55 -- common/autotest_common.sh@819 -- # '[' -z 117489 ']' 00:10:29.234 10:35:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:29.234 10:35:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:29.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:29.234 10:35:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
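The request/response pair printed above is the framework_enable_cpumask_locks RPC being rejected with -32603 because the second target's 0x1c mask overlaps core 2, which the first target already claimed. Outside the test harness the same pair of calls could be driven with scripts/rpc.py; this is only a sketch, assuming the first target listens on the default socket and the second on /var/tmp/spdk2.sock as in the trace:

  scripts/rpc.py framework_enable_cpumask_locks                            # first target: core locks claimed
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks     # second target: fails, core 2 already claimed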
00:10:29.234 10:35:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:29.234 10:35:55 -- common/autotest_common.sh@10 -- # set +x 00:10:29.493 10:35:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:29.493 10:35:56 -- common/autotest_common.sh@852 -- # return 0 00:10:29.493 10:35:56 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:29.493 10:35:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:29.493 10:35:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:29.493 10:35:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:29.493 00:10:29.493 real 0m2.593s 00:10:29.493 user 0m1.332s 00:10:29.493 sys 0m0.217s 00:10:29.493 10:35:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.493 ************************************ 00:10:29.493 10:35:56 -- common/autotest_common.sh@10 -- # set +x 00:10:29.493 END TEST locking_overlapped_coremask_via_rpc 00:10:29.493 ************************************ 00:10:29.493 10:35:56 -- event/cpu_locks.sh@174 -- # cleanup 00:10:29.493 10:35:56 -- event/cpu_locks.sh@15 -- # [[ -z 117466 ]] 00:10:29.493 10:35:56 -- event/cpu_locks.sh@15 -- # killprocess 117466 00:10:29.493 10:35:56 -- common/autotest_common.sh@926 -- # '[' -z 117466 ']' 00:10:29.493 10:35:56 -- common/autotest_common.sh@930 -- # kill -0 117466 00:10:29.493 10:35:56 -- common/autotest_common.sh@931 -- # uname 00:10:29.493 10:35:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:29.493 10:35:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117466 00:10:29.493 10:35:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:29.493 10:35:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:29.493 10:35:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117466' 00:10:29.493 killing process with pid 117466 00:10:29.493 10:35:56 -- common/autotest_common.sh@945 -- # kill 117466 00:10:29.493 10:35:56 -- common/autotest_common.sh@950 -- # wait 117466 00:10:30.060 10:35:56 -- event/cpu_locks.sh@16 -- # [[ -z 117489 ]] 00:10:30.060 10:35:56 -- event/cpu_locks.sh@16 -- # killprocess 117489 00:10:30.060 10:35:56 -- common/autotest_common.sh@926 -- # '[' -z 117489 ']' 00:10:30.060 10:35:56 -- common/autotest_common.sh@930 -- # kill -0 117489 00:10:30.060 10:35:56 -- common/autotest_common.sh@931 -- # uname 00:10:30.060 10:35:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:30.060 10:35:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117489 00:10:30.060 10:35:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:30.060 10:35:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:30.060 10:35:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117489' 00:10:30.060 killing process with pid 117489 00:10:30.060 10:35:56 -- common/autotest_common.sh@945 -- # kill 117489 00:10:30.060 10:35:56 -- common/autotest_common.sh@950 -- # wait 117489 00:10:30.626 10:35:57 -- event/cpu_locks.sh@18 -- # rm -f 00:10:30.626 10:35:57 -- event/cpu_locks.sh@1 -- # cleanup 00:10:30.626 10:35:57 -- event/cpu_locks.sh@15 -- # [[ -z 117466 ]] 00:10:30.626 10:35:57 -- event/cpu_locks.sh@15 -- # killprocess 117466 00:10:30.626 
10:35:57 -- common/autotest_common.sh@926 -- # '[' -z 117466 ']' 00:10:30.626 10:35:57 -- common/autotest_common.sh@930 -- # kill -0 117466 00:10:30.626 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (117466) - No such process 00:10:30.626 10:35:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 117466 is not found' 00:10:30.626 Process with pid 117466 is not found 00:10:30.626 10:35:57 -- event/cpu_locks.sh@16 -- # [[ -z 117489 ]] 00:10:30.626 10:35:57 -- event/cpu_locks.sh@16 -- # killprocess 117489 00:10:30.626 10:35:57 -- common/autotest_common.sh@926 -- # '[' -z 117489 ']' 00:10:30.626 10:35:57 -- common/autotest_common.sh@930 -- # kill -0 117489 00:10:30.626 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (117489) - No such process 00:10:30.626 Process with pid 117489 is not found 00:10:30.626 10:35:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 117489 is not found' 00:10:30.626 10:35:57 -- event/cpu_locks.sh@18 -- # rm -f 00:10:30.626 00:10:30.626 real 0m20.644s 00:10:30.626 user 0m35.107s 00:10:30.626 sys 0m5.870s 00:10:30.626 10:35:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.626 10:35:57 -- common/autotest_common.sh@10 -- # set +x 00:10:30.626 ************************************ 00:10:30.626 END TEST cpu_locks 00:10:30.626 ************************************ 00:10:30.626 00:10:30.626 real 0m49.044s 00:10:30.626 user 1m33.310s 00:10:30.626 sys 0m9.687s 00:10:30.626 10:35:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.626 10:35:57 -- common/autotest_common.sh@10 -- # set +x 00:10:30.626 ************************************ 00:10:30.626 END TEST event 00:10:30.626 ************************************ 00:10:30.626 10:35:57 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:30.626 10:35:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:30.626 10:35:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.626 10:35:57 -- common/autotest_common.sh@10 -- # set +x 00:10:30.626 ************************************ 00:10:30.626 START TEST thread 00:10:30.626 ************************************ 00:10:30.626 10:35:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:30.626 * Looking for test storage... 00:10:30.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:30.626 10:35:57 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:30.626 10:35:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:30.626 10:35:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.626 10:35:57 -- common/autotest_common.sh@10 -- # set +x 00:10:30.626 ************************************ 00:10:30.626 START TEST thread_poller_perf 00:10:30.626 ************************************ 00:10:30.626 10:35:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:30.626 [2024-07-24 10:35:57.249489] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:10:30.626 [2024-07-24 10:35:57.249757] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117628 ] 00:10:30.884 [2024-07-24 10:35:57.399292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.884 [2024-07-24 10:35:57.487239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.884 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:32.259 ====================================== 00:10:32.259 busy:2213713258 (cyc) 00:10:32.259 total_run_count: 294000 00:10:32.259 tsc_hz: 2200000000 (cyc) 00:10:32.259 ====================================== 00:10:32.259 poller_cost: 7529 (cyc), 3422 (nsec) 00:10:32.259 ************************************ 00:10:32.259 END TEST thread_poller_perf 00:10:32.259 ************************************ 00:10:32.259 00:10:32.259 real 0m1.380s 00:10:32.259 user 0m1.183s 00:10:32.259 sys 0m0.096s 00:10:32.259 10:35:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.259 10:35:58 -- common/autotest_common.sh@10 -- # set +x 00:10:32.259 10:35:58 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:32.259 10:35:58 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:32.259 10:35:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.259 10:35:58 -- common/autotest_common.sh@10 -- # set +x 00:10:32.259 ************************************ 00:10:32.259 START TEST thread_poller_perf 00:10:32.259 ************************************ 00:10:32.259 10:35:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:32.259 [2024-07-24 10:35:58.674873] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:32.259 [2024-07-24 10:35:58.675079] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117666 ] 00:10:32.259 [2024-07-24 10:35:58.818869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.259 [2024-07-24 10:35:58.899826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.259 Running 1000 pollers for 1 seconds with 0 microseconds period. 
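For reference, the poller_cost figure in the first summary block above follows from the other counters: busy cycles divided by total_run_count gives the cost of one poller invocation, and dividing by the TSC rate converts cycles to nanoseconds. With the printed numbers:

  poller_cost (cyc)  = 2213713258 / 294000          ≈ 7529
  poller_cost (nsec) = 7529 / (2200000000 / 10^9)   = 7529 / 2.2 ≈ 3422

The same relation holds for the 0-microsecond run whose summary follows.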
00:10:33.635 ====================================== 00:10:33.635 busy:2205467617 (cyc) 00:10:33.635 total_run_count: 3680000 00:10:33.635 tsc_hz: 2200000000 (cyc) 00:10:33.635 ====================================== 00:10:33.635 poller_cost: 599 (cyc), 272 (nsec) 00:10:33.635 00:10:33.635 real 0m1.355s 00:10:33.635 user 0m1.166s 00:10:33.635 sys 0m0.089s 00:10:33.635 10:36:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.635 10:36:00 -- common/autotest_common.sh@10 -- # set +x 00:10:33.635 ************************************ 00:10:33.635 END TEST thread_poller_perf 00:10:33.635 ************************************ 00:10:33.635 10:36:00 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:33.635 10:36:00 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:33.635 10:36:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:33.635 10:36:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:33.635 10:36:00 -- common/autotest_common.sh@10 -- # set +x 00:10:33.635 ************************************ 00:10:33.635 START TEST thread_spdk_lock 00:10:33.635 ************************************ 00:10:33.635 10:36:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:33.635 [2024-07-24 10:36:00.080087] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:33.635 [2024-07-24 10:36:00.080343] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117709 ] 00:10:33.635 [2024-07-24 10:36:00.230959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:33.893 [2024-07-24 10:36:00.326935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.893 [2024-07-24 10:36:00.326946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.460 [2024-07-24 10:36:00.845850] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:34.460 [2024-07-24 10:36:00.845967] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:34.460 [2024-07-24 10:36:00.846010] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x5592bf23e980 00:10:34.460 [2024-07-24 10:36:00.847404] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:34.460 [2024-07-24 10:36:00.847522] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:34.460 [2024-07-24 10:36:00.847621] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:34.460 Starting test contend 00:10:34.460 Worker Delay Wait us Hold us Total us 00:10:34.460 0 3 109788 192917 302706 00:10:34.460 1 5 61483 292471 353954 00:10:34.460 PASS test contend 00:10:34.460 Starting test hold_by_poller 
00:10:34.460 PASS test hold_by_poller 00:10:34.460 Starting test hold_by_message 00:10:34.460 PASS test hold_by_message 00:10:34.460 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:34.460 100014 assertions passed 00:10:34.460 0 assertions failed 00:10:34.460 00:10:34.460 real 0m0.898s 00:10:34.460 user 0m1.232s 00:10:34.460 sys 0m0.088s 00:10:34.460 10:36:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.460 10:36:00 -- common/autotest_common.sh@10 -- # set +x 00:10:34.460 ************************************ 00:10:34.460 END TEST thread_spdk_lock 00:10:34.460 ************************************ 00:10:34.460 00:10:34.460 real 0m3.850s 00:10:34.460 user 0m3.687s 00:10:34.460 sys 0m0.386s 00:10:34.460 10:36:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.460 10:36:00 -- common/autotest_common.sh@10 -- # set +x 00:10:34.460 ************************************ 00:10:34.460 END TEST thread 00:10:34.460 ************************************ 00:10:34.460 10:36:01 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:34.460 10:36:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:34.460 10:36:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:34.460 10:36:01 -- common/autotest_common.sh@10 -- # set +x 00:10:34.460 ************************************ 00:10:34.460 START TEST accel 00:10:34.460 ************************************ 00:10:34.460 10:36:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:34.460 * Looking for test storage... 00:10:34.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:34.460 10:36:01 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:34.460 10:36:01 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:34.460 10:36:01 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:34.460 10:36:01 -- accel/accel.sh@59 -- # spdk_tgt_pid=117787 00:10:34.460 10:36:01 -- accel/accel.sh@60 -- # waitforlisten 117787 00:10:34.460 10:36:01 -- common/autotest_common.sh@819 -- # '[' -z 117787 ']' 00:10:34.460 10:36:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.460 10:36:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:34.460 10:36:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.460 10:36:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:34.460 10:36:01 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:34.460 10:36:01 -- common/autotest_common.sh@10 -- # set +x 00:10:34.461 10:36:01 -- accel/accel.sh@58 -- # build_accel_config 00:10:34.461 10:36:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.461 10:36:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.461 10:36:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.461 10:36:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.461 10:36:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.461 10:36:01 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.461 10:36:01 -- accel/accel.sh@42 -- # jq -r . 00:10:34.719 [2024-07-24 10:36:01.172529] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:10:34.719 [2024-07-24 10:36:01.172756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117787 ] 00:10:34.719 [2024-07-24 10:36:01.319905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.977 [2024-07-24 10:36:01.412247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:34.977 [2024-07-24 10:36:01.412537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.543 10:36:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:35.543 10:36:02 -- common/autotest_common.sh@852 -- # return 0 00:10:35.543 10:36:02 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:35.543 10:36:02 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:35.543 10:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:35.543 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:10:35.543 10:36:02 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:35.543 10:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # IFS== 00:10:35.543 10:36:02 -- accel/accel.sh@64 -- # read -r opc module 00:10:35.543 10:36:02 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:35.543 10:36:02 -- accel/accel.sh@67 -- # killprocess 117787 00:10:35.543 10:36:02 -- common/autotest_common.sh@926 -- # '[' -z 117787 ']' 00:10:35.543 10:36:02 -- common/autotest_common.sh@930 -- # kill -0 117787 00:10:35.543 10:36:02 -- common/autotest_common.sh@931 -- # uname 00:10:35.543 10:36:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:35.543 10:36:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117787 00:10:35.543 10:36:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:35.543 killing process with pid 117787 00:10:35.543 10:36:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:35.543 10:36:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117787' 00:10:35.543 10:36:02 -- common/autotest_common.sh@945 -- # kill 117787 00:10:35.543 10:36:02 -- common/autotest_common.sh@950 -- # wait 117787 00:10:36.133 10:36:02 -- accel/accel.sh@68 -- # trap - ERR 00:10:36.133 10:36:02 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:36.133 10:36:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:36.133 10:36:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.133 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:10:36.133 10:36:02 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:36.133 10:36:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:36.133 10:36:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.133 10:36:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.133 10:36:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.133 10:36:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.133 10:36:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.133 10:36:02 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:10:36.133 10:36:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.133 10:36:02 -- accel/accel.sh@42 -- # jq -r . 00:10:36.133 10:36:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.133 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:10:36.133 10:36:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:36.133 10:36:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:36.133 10:36:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.133 10:36:02 -- common/autotest_common.sh@10 -- # set +x 00:10:36.133 ************************************ 00:10:36.133 START TEST accel_missing_filename 00:10:36.133 ************************************ 00:10:36.133 10:36:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:36.133 10:36:02 -- common/autotest_common.sh@640 -- # local es=0 00:10:36.133 10:36:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:36.133 10:36:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:36.133 10:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:36.133 10:36:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:36.133 10:36:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:36.133 10:36:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:36.133 10:36:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:36.133 10:36:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.133 10:36:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.133 10:36:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.133 10:36:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.133 10:36:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.133 10:36:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.133 10:36:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.133 10:36:02 -- accel/accel.sh@42 -- # jq -r . 00:10:36.133 [2024-07-24 10:36:02.794636] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:36.133 [2024-07-24 10:36:02.794905] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117848 ] 00:10:36.392 [2024-07-24 10:36:02.946131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.392 [2024-07-24 10:36:03.032289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.650 [2024-07-24 10:36:03.087139] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:36.650 [2024-07-24 10:36:03.172452] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:36.650 A filename is required. 
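The "A filename is required." line above is the outcome the test is looking for: the compress workload cannot run without an input file supplied through -l, and the harness deliberately omits it. As a sketch, using the binary and input paths that appear in the trace (run from the repository root):

  ./build/examples/accel_perf -t 1 -w compress                        # rejected: no -l, so a filename is required
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib    # supplies the test input, so the workload can start

The very next test then shows that adding -y on top of this is refused as well, since compress has no verify mode.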
00:10:36.650 10:36:03 -- common/autotest_common.sh@643 -- # es=234 00:10:36.650 10:36:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:36.650 10:36:03 -- common/autotest_common.sh@652 -- # es=106 00:10:36.650 10:36:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:36.650 10:36:03 -- common/autotest_common.sh@660 -- # es=1 00:10:36.650 10:36:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:36.650 00:10:36.650 real 0m0.513s 00:10:36.650 user 0m0.274s 00:10:36.650 sys 0m0.188s 00:10:36.650 10:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.650 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:10:36.650 ************************************ 00:10:36.650 END TEST accel_missing_filename 00:10:36.650 ************************************ 00:10:36.650 10:36:03 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.650 10:36:03 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:36.650 10:36:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:36.650 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:10:36.650 ************************************ 00:10:36.650 START TEST accel_compress_verify 00:10:36.650 ************************************ 00:10:36.650 10:36:03 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.650 10:36:03 -- common/autotest_common.sh@640 -- # local es=0 00:10:36.650 10:36:03 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.650 10:36:03 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:36.650 10:36:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:36.650 10:36:03 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:36.650 10:36:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:36.650 10:36:03 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.907 10:36:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:36.907 10:36:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:36.907 10:36:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:36.907 10:36:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.907 10:36:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.907 10:36:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:36.907 10:36:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:36.907 10:36:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:36.907 10:36:03 -- accel/accel.sh@42 -- # jq -r . 00:10:36.907 [2024-07-24 10:36:03.352279] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:10:36.907 [2024-07-24 10:36:03.352478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117880 ] 00:10:36.907 [2024-07-24 10:36:03.495299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.907 [2024-07-24 10:36:03.583911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.165 [2024-07-24 10:36:03.638334] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:37.165 [2024-07-24 10:36:03.720685] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:37.165 00:10:37.165 Compression does not support the verify option, aborting. 00:10:37.165 10:36:03 -- common/autotest_common.sh@643 -- # es=161 00:10:37.165 10:36:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:37.165 10:36:03 -- common/autotest_common.sh@652 -- # es=33 00:10:37.165 10:36:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:37.165 10:36:03 -- common/autotest_common.sh@660 -- # es=1 00:10:37.165 10:36:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:37.165 00:10:37.165 real 0m0.501s 00:10:37.165 user 0m0.310s 00:10:37.165 sys 0m0.138s 00:10:37.165 10:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.165 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:10:37.165 ************************************ 00:10:37.165 END TEST accel_compress_verify 00:10:37.165 ************************************ 00:10:37.424 10:36:03 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:37.424 10:36:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:37.424 10:36:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.424 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:10:37.424 ************************************ 00:10:37.424 START TEST accel_wrong_workload 00:10:37.424 ************************************ 00:10:37.424 10:36:03 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:37.424 10:36:03 -- common/autotest_common.sh@640 -- # local es=0 00:10:37.424 10:36:03 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:37.424 10:36:03 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:37.424 10:36:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.424 10:36:03 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:37.424 10:36:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.424 10:36:03 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:37.424 10:36:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:37.424 10:36:03 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.424 10:36:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.424 10:36:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.424 10:36:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.424 10:36:03 -- accel/accel.sh@42 -- # jq -r . 
00:10:37.424 Unsupported workload type: foobar 00:10:37.424 [2024-07-24 10:36:03.902926] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:37.424 accel_perf options: 00:10:37.424 [-h help message] 00:10:37.424 [-q queue depth per core] 00:10:37.424 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:37.424 [-T number of threads per core 00:10:37.424 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:37.424 [-t time in seconds] 00:10:37.424 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:37.424 [ dif_verify, , dif_generate, dif_generate_copy 00:10:37.424 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:37.424 [-l for compress/decompress workloads, name of uncompressed input file 00:10:37.424 [-S for crc32c workload, use this seed value (default 0) 00:10:37.424 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:37.424 [-f for fill workload, use this BYTE value (default 255) 00:10:37.424 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:37.424 [-y verify result if this switch is on] 00:10:37.424 [-a tasks to allocate per core (default: same value as -q)] 00:10:37.424 Can be used to spread operations across a wider range of memory. 00:10:37.424 10:36:03 -- common/autotest_common.sh@643 -- # es=1 00:10:37.424 10:36:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:37.424 10:36:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:37.424 10:36:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:37.424 00:10:37.424 real 0m0.049s 00:10:37.424 user 0m0.030s 00:10:37.424 sys 0m0.019s 00:10:37.424 10:36:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.424 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:10:37.424 ************************************ 00:10:37.424 END TEST accel_wrong_workload 00:10:37.424 ************************************ 00:10:37.424 10:36:03 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:37.424 10:36:03 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:37.424 10:36:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.424 10:36:03 -- common/autotest_common.sh@10 -- # set +x 00:10:37.424 ************************************ 00:10:37.424 START TEST accel_negative_buffers 00:10:37.424 ************************************ 00:10:37.424 10:36:03 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:37.424 10:36:03 -- common/autotest_common.sh@640 -- # local es=0 00:10:37.424 10:36:03 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:37.424 10:36:03 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:37.424 10:36:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.424 10:36:03 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:37.424 10:36:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:37.424 10:36:03 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:37.424 10:36:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:37.424 10:36:03 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:37.424 10:36:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.424 10:36:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.424 10:36:03 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.424 10:36:03 -- accel/accel.sh@42 -- # jq -r . 00:10:37.424 -x option must be non-negative. 00:10:37.424 [2024-07-24 10:36:04.003346] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:37.424 accel_perf options: 00:10:37.424 [-h help message] 00:10:37.424 [-q queue depth per core] 00:10:37.424 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:37.424 [-T number of threads per core 00:10:37.424 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:37.424 [-t time in seconds] 00:10:37.424 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:37.424 [ dif_verify, , dif_generate, dif_generate_copy 00:10:37.424 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:37.424 [-l for compress/decompress workloads, name of uncompressed input file 00:10:37.424 [-S for crc32c workload, use this seed value (default 0) 00:10:37.424 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:37.424 [-f for fill workload, use this BYTE value (default 255) 00:10:37.424 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:37.424 [-y verify result if this switch is on] 00:10:37.424 [-a tasks to allocate per core (default: same value as -q)] 00:10:37.424 Can be used to spread operations across a wider range of memory. 
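The options listing printed above documents the accel_perf flags these tests exercise (-t, -w, -q, -o, -x, -y and friends). Purely as an illustration, with values chosen for the example rather than taken from this log, a valid xor run with the minimum allowed number of source buffers would look something like:

# hypothetical invocation: one-second xor run, two source buffers (the negative test above
# passed "-x -1" and was rejected), queue depth 32, 4 KiB transfers, verification enabled
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -x 2 -q 32 -o 4096 -y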
00:10:37.424 10:36:04 -- common/autotest_common.sh@643 -- # es=1 00:10:37.424 10:36:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:37.424 10:36:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:37.424 10:36:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:37.424 00:10:37.424 real 0m0.050s 00:10:37.424 user 0m0.028s 00:10:37.424 sys 0m0.023s 00:10:37.424 10:36:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.424 10:36:04 -- common/autotest_common.sh@10 -- # set +x 00:10:37.424 ************************************ 00:10:37.424 END TEST accel_negative_buffers 00:10:37.424 ************************************ 00:10:37.424 10:36:04 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:37.424 10:36:04 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:37.424 10:36:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.424 10:36:04 -- common/autotest_common.sh@10 -- # set +x 00:10:37.424 ************************************ 00:10:37.424 START TEST accel_crc32c 00:10:37.424 ************************************ 00:10:37.424 10:36:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:37.424 10:36:04 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.424 10:36:04 -- accel/accel.sh@17 -- # local accel_module 00:10:37.424 10:36:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:37.424 10:36:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:37.424 10:36:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.424 10:36:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.424 10:36:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.424 10:36:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.424 10:36:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.424 10:36:04 -- accel/accel.sh@42 -- # jq -r . 00:10:37.424 [2024-07-24 10:36:04.095847] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:37.425 [2024-07-24 10:36:04.096104] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117952 ] 00:10:37.683 [2024-07-24 10:36:04.247169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.683 [2024-07-24 10:36:04.331628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.054 10:36:05 -- accel/accel.sh@18 -- # out=' 00:10:39.054 SPDK Configuration: 00:10:39.054 Core mask: 0x1 00:10:39.054 00:10:39.054 Accel Perf Configuration: 00:10:39.054 Workload Type: crc32c 00:10:39.054 CRC-32C seed: 32 00:10:39.054 Transfer size: 4096 bytes 00:10:39.054 Vector count 1 00:10:39.054 Module: software 00:10:39.054 Queue depth: 32 00:10:39.054 Allocate depth: 32 00:10:39.054 # threads/core: 1 00:10:39.054 Run time: 1 seconds 00:10:39.054 Verify: Yes 00:10:39.054 00:10:39.054 Running for 1 seconds... 
00:10:39.054 00:10:39.054 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:39.054 ------------------------------------------------------------------------------------ 00:10:39.054 0,0 410656/s 1604 MiB/s 0 0 00:10:39.054 ==================================================================================== 00:10:39.054 Total 410656/s 1604 MiB/s 0 0' 00:10:39.054 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.054 10:36:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:39.054 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.054 10:36:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:39.054 10:36:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.054 10:36:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.054 10:36:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.054 10:36:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.054 10:36:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.054 10:36:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.054 10:36:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.054 10:36:05 -- accel/accel.sh@42 -- # jq -r . 00:10:39.055 [2024-07-24 10:36:05.608212] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:39.055 [2024-07-24 10:36:05.608488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117984 ] 00:10:39.312 [2024-07-24 10:36:05.755740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.312 [2024-07-24 10:36:05.857369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val= 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val= 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=0x1 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val= 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val= 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=crc32c 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=32 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val= 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=software 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@23 -- # accel_module=software 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=32 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=32 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=1 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val=Yes 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val= 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.312 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:39.312 10:36:05 -- accel/accel.sh@21 -- # val= 00:10:39.312 10:36:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.313 10:36:05 -- accel/accel.sh@20 -- # IFS=: 00:10:39.313 10:36:05 -- accel/accel.sh@20 -- # read -r var val 00:10:40.724 10:36:07 -- accel/accel.sh@21 -- # val= 00:10:40.724 10:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # IFS=: 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # read -r var val 00:10:40.724 10:36:07 -- accel/accel.sh@21 -- # val= 00:10:40.724 10:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # IFS=: 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # read -r var val 00:10:40.724 10:36:07 -- accel/accel.sh@21 -- # val= 00:10:40.724 10:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # IFS=: 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # read -r var val 00:10:40.724 10:36:07 -- accel/accel.sh@21 -- # val= 00:10:40.724 10:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # IFS=: 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # read -r var val 00:10:40.724 10:36:07 -- accel/accel.sh@21 -- # val= 00:10:40.724 10:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # IFS=: 00:10:40.724 10:36:07 
-- accel/accel.sh@20 -- # read -r var val 00:10:40.724 10:36:07 -- accel/accel.sh@21 -- # val= 00:10:40.724 10:36:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # IFS=: 00:10:40.724 10:36:07 -- accel/accel.sh@20 -- # read -r var val 00:10:40.724 10:36:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:40.724 10:36:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:40.724 10:36:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:40.724 00:10:40.724 real 0m3.048s 00:10:40.724 user 0m2.607s 00:10:40.724 sys 0m0.283s 00:10:40.724 10:36:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.724 10:36:07 -- common/autotest_common.sh@10 -- # set +x 00:10:40.724 ************************************ 00:10:40.724 END TEST accel_crc32c 00:10:40.724 ************************************ 00:10:40.724 10:36:07 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:40.724 10:36:07 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:40.724 10:36:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.724 10:36:07 -- common/autotest_common.sh@10 -- # set +x 00:10:40.724 ************************************ 00:10:40.724 START TEST accel_crc32c_C2 00:10:40.724 ************************************ 00:10:40.724 10:36:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:40.724 10:36:07 -- accel/accel.sh@16 -- # local accel_opc 00:10:40.724 10:36:07 -- accel/accel.sh@17 -- # local accel_module 00:10:40.724 10:36:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:40.724 10:36:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:40.724 10:36:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.724 10:36:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.724 10:36:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.724 10:36:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.724 10:36:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.724 10:36:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.724 10:36:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.724 10:36:07 -- accel/accel.sh@42 -- # jq -r . 00:10:40.724 [2024-07-24 10:36:07.192344] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:40.724 [2024-07-24 10:36:07.192598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118026 ] 00:10:40.724 [2024-07-24 10:36:07.339442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.009 [2024-07-24 10:36:07.422829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.382 10:36:08 -- accel/accel.sh@18 -- # out=' 00:10:42.382 SPDK Configuration: 00:10:42.382 Core mask: 0x1 00:10:42.382 00:10:42.382 Accel Perf Configuration: 00:10:42.382 Workload Type: crc32c 00:10:42.382 CRC-32C seed: 0 00:10:42.382 Transfer size: 4096 bytes 00:10:42.382 Vector count 2 00:10:42.382 Module: software 00:10:42.382 Queue depth: 32 00:10:42.382 Allocate depth: 32 00:10:42.382 # threads/core: 1 00:10:42.382 Run time: 1 seconds 00:10:42.382 Verify: Yes 00:10:42.382 00:10:42.382 Running for 1 seconds... 
00:10:42.382 00:10:42.382 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:42.382 ------------------------------------------------------------------------------------ 00:10:42.382 0,0 319168/s 2493 MiB/s 0 0 00:10:42.382 ==================================================================================== 00:10:42.382 Total 319168/s 1246 MiB/s 0 0' 00:10:42.382 10:36:08 -- accel/accel.sh@20 -- # IFS=: 00:10:42.382 10:36:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:42.382 10:36:08 -- accel/accel.sh@20 -- # read -r var val 00:10:42.382 10:36:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:42.382 10:36:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.382 10:36:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.382 10:36:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.382 10:36:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.382 10:36:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.382 10:36:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.382 10:36:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.382 10:36:08 -- accel/accel.sh@42 -- # jq -r . 00:10:42.382 [2024-07-24 10:36:08.775618] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:42.382 [2024-07-24 10:36:08.776027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118052 ] 00:10:42.382 [2024-07-24 10:36:08.930398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.640 [2024-07-24 10:36:09.067081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val= 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val= 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=0x1 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val= 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val= 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=crc32c 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=0 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val= 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=software 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@23 -- # accel_module=software 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=32 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=32 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=1 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val=Yes 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val= 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:42.640 10:36:09 -- accel/accel.sh@21 -- # val= 00:10:42.640 10:36:09 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.640 10:36:09 -- accel/accel.sh@20 -- # IFS=: 00:10:42.641 10:36:09 -- accel/accel.sh@20 -- # read -r var val 00:10:44.015 10:36:10 -- accel/accel.sh@21 -- # val= 00:10:44.015 10:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # IFS=: 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # read -r var val 00:10:44.015 10:36:10 -- accel/accel.sh@21 -- # val= 00:10:44.015 10:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # IFS=: 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # read -r var val 00:10:44.015 10:36:10 -- accel/accel.sh@21 -- # val= 00:10:44.015 10:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # IFS=: 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # read -r var val 00:10:44.015 10:36:10 -- accel/accel.sh@21 -- # val= 00:10:44.015 10:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # IFS=: 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # read -r var val 00:10:44.015 10:36:10 -- accel/accel.sh@21 -- # val= 00:10:44.015 10:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # IFS=: 00:10:44.015 10:36:10 -- 
accel/accel.sh@20 -- # read -r var val 00:10:44.015 10:36:10 -- accel/accel.sh@21 -- # val= 00:10:44.015 10:36:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # IFS=: 00:10:44.015 10:36:10 -- accel/accel.sh@20 -- # read -r var val 00:10:44.015 10:36:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:44.015 10:36:10 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:44.015 10:36:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:44.015 00:10:44.015 real 0m3.244s 00:10:44.015 user 0m2.725s 00:10:44.015 sys 0m0.356s 00:10:44.015 10:36:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.015 10:36:10 -- common/autotest_common.sh@10 -- # set +x 00:10:44.015 ************************************ 00:10:44.015 END TEST accel_crc32c_C2 00:10:44.015 ************************************ 00:10:44.015 10:36:10 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:44.015 10:36:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:44.015 10:36:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.015 10:36:10 -- common/autotest_common.sh@10 -- # set +x 00:10:44.015 ************************************ 00:10:44.015 START TEST accel_copy 00:10:44.015 ************************************ 00:10:44.015 10:36:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:44.015 10:36:10 -- accel/accel.sh@16 -- # local accel_opc 00:10:44.015 10:36:10 -- accel/accel.sh@17 -- # local accel_module 00:10:44.015 10:36:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:44.015 10:36:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.015 10:36:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:44.015 10:36:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.015 10:36:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.015 10:36:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.015 10:36:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.015 10:36:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.015 10:36:10 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.015 10:36:10 -- accel/accel.sh@42 -- # jq -r . 00:10:44.015 [2024-07-24 10:36:10.482181] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:44.015 [2024-07-24 10:36:10.482464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118097 ] 00:10:44.016 [2024-07-24 10:36:10.629181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.274 [2024-07-24 10:36:10.750722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.658 10:36:12 -- accel/accel.sh@18 -- # out=' 00:10:45.658 SPDK Configuration: 00:10:45.658 Core mask: 0x1 00:10:45.658 00:10:45.658 Accel Perf Configuration: 00:10:45.658 Workload Type: copy 00:10:45.658 Transfer size: 4096 bytes 00:10:45.658 Vector count 1 00:10:45.658 Module: software 00:10:45.658 Queue depth: 32 00:10:45.658 Allocate depth: 32 00:10:45.658 # threads/core: 1 00:10:45.658 Run time: 1 seconds 00:10:45.658 Verify: Yes 00:10:45.658 00:10:45.658 Running for 1 seconds... 
00:10:45.658 00:10:45.658 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:45.658 ------------------------------------------------------------------------------------ 00:10:45.658 0,0 275104/s 1074 MiB/s 0 0 00:10:45.658 ==================================================================================== 00:10:45.658 Total 275104/s 1074 MiB/s 0 0' 00:10:45.658 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.658 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.658 10:36:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:45.658 10:36:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:45.658 10:36:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.658 10:36:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.658 10:36:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.658 10:36:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.658 10:36:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.658 10:36:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.658 10:36:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.658 10:36:12 -- accel/accel.sh@42 -- # jq -r . 00:10:45.658 [2024-07-24 10:36:12.112478] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:45.658 [2024-07-24 10:36:12.112711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118120 ] 00:10:45.658 [2024-07-24 10:36:12.258663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.917 [2024-07-24 10:36:12.384398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val= 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val= 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val=0x1 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val= 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val= 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val=copy 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- 
accel/accel.sh@21 -- # val= 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val=software 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@23 -- # accel_module=software 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val=32 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val=32 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val=1 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val=Yes 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val= 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:45.917 10:36:12 -- accel/accel.sh@21 -- # val= 00:10:45.917 10:36:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # IFS=: 00:10:45.917 10:36:12 -- accel/accel.sh@20 -- # read -r var val 00:10:47.293 10:36:13 -- accel/accel.sh@21 -- # val= 00:10:47.293 10:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # IFS=: 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # read -r var val 00:10:47.293 10:36:13 -- accel/accel.sh@21 -- # val= 00:10:47.293 10:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # IFS=: 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # read -r var val 00:10:47.293 10:36:13 -- accel/accel.sh@21 -- # val= 00:10:47.293 10:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # IFS=: 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # read -r var val 00:10:47.293 10:36:13 -- accel/accel.sh@21 -- # val= 00:10:47.293 10:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # IFS=: 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # read -r var val 00:10:47.293 10:36:13 -- accel/accel.sh@21 -- # val= 00:10:47.293 10:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # IFS=: 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # read -r var val 00:10:47.293 10:36:13 -- accel/accel.sh@21 -- # val= 00:10:47.293 10:36:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.293 10:36:13 -- accel/accel.sh@20 -- # IFS=: 00:10:47.293 10:36:13 -- 
accel/accel.sh@20 -- # read -r var val 00:10:47.293 10:36:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:47.293 10:36:13 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:47.293 10:36:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.293 00:10:47.293 real 0m3.286s 00:10:47.293 user 0m2.766s 00:10:47.293 sys 0m0.360s 00:10:47.293 10:36:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.293 10:36:13 -- common/autotest_common.sh@10 -- # set +x 00:10:47.293 ************************************ 00:10:47.293 END TEST accel_copy 00:10:47.293 ************************************ 00:10:47.293 10:36:13 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:47.293 10:36:13 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:47.293 10:36:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:47.293 10:36:13 -- common/autotest_common.sh@10 -- # set +x 00:10:47.293 ************************************ 00:10:47.293 START TEST accel_fill 00:10:47.293 ************************************ 00:10:47.293 10:36:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:47.293 10:36:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.293 10:36:13 -- accel/accel.sh@17 -- # local accel_module 00:10:47.293 10:36:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:47.293 10:36:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:47.293 10:36:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.293 10:36:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.293 10:36:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.293 10:36:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.293 10:36:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.293 10:36:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.293 10:36:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.293 10:36:13 -- accel/accel.sh@42 -- # jq -r . 00:10:47.294 [2024-07-24 10:36:13.821389] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:47.294 [2024-07-24 10:36:13.821671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118165 ] 00:10:47.294 [2024-07-24 10:36:13.969177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.552 [2024-07-24 10:36:14.094178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.926 10:36:15 -- accel/accel.sh@18 -- # out=' 00:10:48.926 SPDK Configuration: 00:10:48.926 Core mask: 0x1 00:10:48.926 00:10:48.926 Accel Perf Configuration: 00:10:48.926 Workload Type: fill 00:10:48.926 Fill pattern: 0x80 00:10:48.926 Transfer size: 4096 bytes 00:10:48.926 Vector count 1 00:10:48.926 Module: software 00:10:48.926 Queue depth: 64 00:10:48.926 Allocate depth: 64 00:10:48.926 # threads/core: 1 00:10:48.926 Run time: 1 seconds 00:10:48.926 Verify: Yes 00:10:48.926 00:10:48.926 Running for 1 seconds... 
00:10:48.926 00:10:48.926 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:48.926 ------------------------------------------------------------------------------------ 00:10:48.926 0,0 395968/s 1546 MiB/s 0 0 00:10:48.926 ==================================================================================== 00:10:48.926 Total 395968/s 1546 MiB/s 0 0' 00:10:48.926 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:48.926 10:36:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.926 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:48.926 10:36:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.926 10:36:15 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.926 10:36:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.926 10:36:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.926 10:36:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.926 10:36:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.926 10:36:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.926 10:36:15 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.926 10:36:15 -- accel/accel.sh@42 -- # jq -r . 00:10:48.926 [2024-07-24 10:36:15.477444] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:48.926 [2024-07-24 10:36:15.477684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118195 ] 00:10:49.184 [2024-07-24 10:36:15.622876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.184 [2024-07-24 10:36:15.766126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val= 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val= 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=0x1 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val= 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val= 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=fill 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=0x80 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 
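The bandwidth column in these result tables follows directly from the transfer rate and the 4096-byte transfer size. For the fill result above, a quick shell-arithmetic sanity check (illustration only) reproduces the reported figure:

# 395968 transfers/s * 4096 bytes per transfer, expressed in MiB/s (1 MiB = 1048576 bytes)
echo $(( 395968 * 4096 / 1048576 ))   # prints 1546, matching the 1546 MiB/s above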
00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val= 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=software 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=64 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=64 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=1 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val=Yes 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.443 10:36:15 -- accel/accel.sh@21 -- # val= 00:10:49.443 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.443 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.444 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:49.444 10:36:15 -- accel/accel.sh@21 -- # val= 00:10:49.444 10:36:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.444 10:36:15 -- accel/accel.sh@20 -- # IFS=: 00:10:49.444 10:36:15 -- accel/accel.sh@20 -- # read -r var val 00:10:50.820 10:36:17 -- accel/accel.sh@21 -- # val= 00:10:50.820 10:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.820 10:36:17 -- accel/accel.sh@21 -- # val= 00:10:50.820 10:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.820 10:36:17 -- accel/accel.sh@21 -- # val= 00:10:50.820 10:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.820 10:36:17 -- accel/accel.sh@21 -- # val= 00:10:50.820 10:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.820 10:36:17 -- accel/accel.sh@21 -- # val= 00:10:50.820 10:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # IFS=: 
00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.820 10:36:17 -- accel/accel.sh@21 -- # val= 00:10:50.820 10:36:17 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # IFS=: 00:10:50.820 10:36:17 -- accel/accel.sh@20 -- # read -r var val 00:10:50.820 10:36:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:50.820 10:36:17 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:50.820 10:36:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:50.820 00:10:50.820 real 0m3.336s 00:10:50.820 user 0m2.808s 00:10:50.820 sys 0m0.358s 00:10:50.820 10:36:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:50.820 10:36:17 -- common/autotest_common.sh@10 -- # set +x 00:10:50.820 ************************************ 00:10:50.820 END TEST accel_fill 00:10:50.820 ************************************ 00:10:50.820 10:36:17 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:50.820 10:36:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:50.820 10:36:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:50.820 10:36:17 -- common/autotest_common.sh@10 -- # set +x 00:10:50.820 ************************************ 00:10:50.820 START TEST accel_copy_crc32c 00:10:50.820 ************************************ 00:10:50.820 10:36:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:50.820 10:36:17 -- accel/accel.sh@16 -- # local accel_opc 00:10:50.820 10:36:17 -- accel/accel.sh@17 -- # local accel_module 00:10:50.820 10:36:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:50.820 10:36:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:50.820 10:36:17 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.820 10:36:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.820 10:36:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.820 10:36:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.820 10:36:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.820 10:36:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.820 10:36:17 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.820 10:36:17 -- accel/accel.sh@42 -- # jq -r . 00:10:50.820 [2024-07-24 10:36:17.216942] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:50.820 [2024-07-24 10:36:17.217324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118236 ] 00:10:50.821 [2024-07-24 10:36:17.371104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.821 [2024-07-24 10:36:17.496446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.196 10:36:18 -- accel/accel.sh@18 -- # out=' 00:10:52.196 SPDK Configuration: 00:10:52.196 Core mask: 0x1 00:10:52.196 00:10:52.196 Accel Perf Configuration: 00:10:52.196 Workload Type: copy_crc32c 00:10:52.196 CRC-32C seed: 0 00:10:52.196 Vector size: 4096 bytes 00:10:52.196 Transfer size: 4096 bytes 00:10:52.196 Vector count 1 00:10:52.196 Module: software 00:10:52.196 Queue depth: 32 00:10:52.196 Allocate depth: 32 00:10:52.196 # threads/core: 1 00:10:52.196 Run time: 1 seconds 00:10:52.197 Verify: Yes 00:10:52.197 00:10:52.197 Running for 1 seconds... 
00:10:52.197 00:10:52.197 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:52.197 ------------------------------------------------------------------------------------ 00:10:52.197 0,0 217312/s 848 MiB/s 0 0 00:10:52.197 ==================================================================================== 00:10:52.197 Total 217312/s 848 MiB/s 0 0' 00:10:52.197 10:36:18 -- accel/accel.sh@20 -- # IFS=: 00:10:52.197 10:36:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:52.197 10:36:18 -- accel/accel.sh@20 -- # read -r var val 00:10:52.197 10:36:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:52.197 10:36:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.197 10:36:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.197 10:36:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.197 10:36:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.197 10:36:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.197 10:36:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.197 10:36:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.197 10:36:18 -- accel/accel.sh@42 -- # jq -r . 00:10:52.197 [2024-07-24 10:36:18.869423] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:52.197 [2024-07-24 10:36:18.869789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118271 ] 00:10:52.455 [2024-07-24 10:36:19.026848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.714 [2024-07-24 10:36:19.178485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val= 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val= 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=0x1 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val= 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val= 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=0 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 
10:36:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val= 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=software 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@23 -- # accel_module=software 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=32 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=32 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=1 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val=Yes 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val= 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:52.714 10:36:19 -- accel/accel.sh@21 -- # val= 00:10:52.714 10:36:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # IFS=: 00:10:52.714 10:36:19 -- accel/accel.sh@20 -- # read -r var val 00:10:54.090 10:36:20 -- accel/accel.sh@21 -- # val= 00:10:54.090 10:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # IFS=: 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # read -r var val 00:10:54.090 10:36:20 -- accel/accel.sh@21 -- # val= 00:10:54.090 10:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # IFS=: 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # read -r var val 00:10:54.090 10:36:20 -- accel/accel.sh@21 -- # val= 00:10:54.090 10:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # IFS=: 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # read -r var val 00:10:54.090 10:36:20 -- accel/accel.sh@21 -- # val= 00:10:54.090 10:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # IFS=: 
00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # read -r var val 00:10:54.090 10:36:20 -- accel/accel.sh@21 -- # val= 00:10:54.090 10:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # IFS=: 00:10:54.090 10:36:20 -- accel/accel.sh@20 -- # read -r var val 00:10:54.090 10:36:20 -- accel/accel.sh@21 -- # val= 00:10:54.091 10:36:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.091 10:36:20 -- accel/accel.sh@20 -- # IFS=: 00:10:54.091 10:36:20 -- accel/accel.sh@20 -- # read -r var val 00:10:54.091 10:36:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:54.091 10:36:20 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:54.091 10:36:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:54.091 00:10:54.091 real 0m3.358s 00:10:54.091 user 0m2.825s 00:10:54.091 sys 0m0.392s 00:10:54.091 10:36:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.091 10:36:20 -- common/autotest_common.sh@10 -- # set +x 00:10:54.091 ************************************ 00:10:54.091 END TEST accel_copy_crc32c 00:10:54.091 ************************************ 00:10:54.091 10:36:20 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:54.091 10:36:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:54.091 10:36:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:54.091 10:36:20 -- common/autotest_common.sh@10 -- # set +x 00:10:54.091 ************************************ 00:10:54.091 START TEST accel_copy_crc32c_C2 00:10:54.091 ************************************ 00:10:54.091 10:36:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:54.091 10:36:20 -- accel/accel.sh@16 -- # local accel_opc 00:10:54.091 10:36:20 -- accel/accel.sh@17 -- # local accel_module 00:10:54.091 10:36:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:54.091 10:36:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:54.091 10:36:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.091 10:36:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.091 10:36:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.091 10:36:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.091 10:36:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.091 10:36:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.091 10:36:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.091 10:36:20 -- accel/accel.sh@42 -- # jq -r . 00:10:54.091 [2024-07-24 10:36:20.618691] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:10:54.091 [2024-07-24 10:36:20.618942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118311 ] 00:10:54.091 [2024-07-24 10:36:20.765715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.350 [2024-07-24 10:36:20.883267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.724 10:36:22 -- accel/accel.sh@18 -- # out=' 00:10:55.724 SPDK Configuration: 00:10:55.724 Core mask: 0x1 00:10:55.724 00:10:55.724 Accel Perf Configuration: 00:10:55.724 Workload Type: copy_crc32c 00:10:55.724 CRC-32C seed: 0 00:10:55.724 Vector size: 4096 bytes 00:10:55.724 Transfer size: 8192 bytes 00:10:55.724 Vector count 2 00:10:55.724 Module: software 00:10:55.724 Queue depth: 32 00:10:55.724 Allocate depth: 32 00:10:55.724 # threads/core: 1 00:10:55.724 Run time: 1 seconds 00:10:55.724 Verify: Yes 00:10:55.724 00:10:55.724 Running for 1 seconds... 00:10:55.724 00:10:55.724 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:55.724 ------------------------------------------------------------------------------------ 00:10:55.724 0,0 147744/s 1154 MiB/s 0 0 00:10:55.724 ==================================================================================== 00:10:55.724 Total 147744/s 577 MiB/s 0 0' 00:10:55.724 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.724 10:36:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:55.724 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.724 10:36:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:55.724 10:36:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.724 10:36:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.724 10:36:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.724 10:36:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.724 10:36:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.724 10:36:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.724 10:36:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.724 10:36:22 -- accel/accel.sh@42 -- # jq -r . 00:10:55.724 [2024-07-24 10:36:22.190578] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
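The table just above is the first of two copy_crc32c passes in this test: with -C 2 each operation carries two 4096-byte source vectors, which is why the configuration reports an 8192-byte transfer size, and 147744 transfers/s over 8 KiB lines up with the ~1154 MiB/s shown for core 0. A minimal sketch of repeating that single pass by hand — assuming the same /home/vagrant/spdk_repo/spdk checkout seen in the trace, and dropping the -c /dev/fd/62 JSON config the harness pipes in — would be:

  # Sketch only: same flags as the traced run, without the harness-supplied JSON config.
  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  # -t 1 : "Run time: 1 seconds"   -w copy_crc32c : workload from the dump above
  # -y   : "Verify: Yes"           -C 2           : two 4096-byte vectors per transfer
  "$ACCEL_PERF" -t 1 -w copy_crc32c -y -C 2
  # Bandwidth check against the core-0 row: 147744 transfers/s * 8192 B per transfer
  awk 'BEGIN { printf "%.0f MiB/s\n", 147744 * 8192 / 1048576 }'   # ~= 1154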
00:10:55.724 [2024-07-24 10:36:22.190790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118339 ] 00:10:55.724 [2024-07-24 10:36:22.333596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.983 [2024-07-24 10:36:22.441019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val= 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val= 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=0x1 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val= 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val= 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=0 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val= 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=software 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@23 -- # accel_module=software 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=32 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=32 
00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=1 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val=Yes 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val= 00:10:55.983 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.983 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:55.983 10:36:22 -- accel/accel.sh@21 -- # val= 00:10:55.984 10:36:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.984 10:36:22 -- accel/accel.sh@20 -- # IFS=: 00:10:55.984 10:36:22 -- accel/accel.sh@20 -- # read -r var val 00:10:57.414 10:36:23 -- accel/accel.sh@21 -- # val= 00:10:57.414 10:36:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # IFS=: 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # read -r var val 00:10:57.414 10:36:23 -- accel/accel.sh@21 -- # val= 00:10:57.414 10:36:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # IFS=: 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # read -r var val 00:10:57.414 10:36:23 -- accel/accel.sh@21 -- # val= 00:10:57.414 10:36:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # IFS=: 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # read -r var val 00:10:57.414 10:36:23 -- accel/accel.sh@21 -- # val= 00:10:57.414 10:36:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # IFS=: 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # read -r var val 00:10:57.414 10:36:23 -- accel/accel.sh@21 -- # val= 00:10:57.414 10:36:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # IFS=: 00:10:57.414 10:36:23 -- accel/accel.sh@20 -- # read -r var val 00:10:57.414 10:36:23 -- accel/accel.sh@21 -- # val= 00:10:57.415 10:36:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.415 10:36:23 -- accel/accel.sh@20 -- # IFS=: 00:10:57.415 10:36:23 -- accel/accel.sh@20 -- # read -r var val 00:10:57.415 10:36:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:57.415 10:36:23 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:57.415 10:36:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:57.415 00:10:57.415 real 0m3.121s 00:10:57.415 user 0m2.611s 00:10:57.415 sys 0m0.348s 00:10:57.415 10:36:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.415 ************************************ 00:10:57.415 END TEST accel_copy_crc32c_C2 00:10:57.415 ************************************ 00:10:57.415 10:36:23 -- common/autotest_common.sh@10 -- # set +x 00:10:57.415 10:36:23 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:57.415 10:36:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:10:57.415 10:36:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:57.415 10:36:23 -- common/autotest_common.sh@10 -- # set +x 00:10:57.415 ************************************ 00:10:57.415 START TEST accel_dualcast 00:10:57.415 ************************************ 00:10:57.415 10:36:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:57.415 10:36:23 -- accel/accel.sh@16 -- # local accel_opc 00:10:57.415 10:36:23 -- accel/accel.sh@17 -- # local accel_module 00:10:57.415 10:36:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:57.415 10:36:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:57.415 10:36:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.415 10:36:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.415 10:36:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.415 10:36:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.415 10:36:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.415 10:36:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.415 10:36:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.415 10:36:23 -- accel/accel.sh@42 -- # jq -r . 00:10:57.415 [2024-07-24 10:36:23.795732] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:57.415 [2024-07-24 10:36:23.796002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118379 ] 00:10:57.415 [2024-07-24 10:36:23.942951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.415 [2024-07-24 10:36:24.030190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.790 10:36:25 -- accel/accel.sh@18 -- # out=' 00:10:58.790 SPDK Configuration: 00:10:58.790 Core mask: 0x1 00:10:58.790 00:10:58.790 Accel Perf Configuration: 00:10:58.790 Workload Type: dualcast 00:10:58.790 Transfer size: 4096 bytes 00:10:58.790 Vector count 1 00:10:58.790 Module: software 00:10:58.790 Queue depth: 32 00:10:58.790 Allocate depth: 32 00:10:58.790 # threads/core: 1 00:10:58.790 Run time: 1 seconds 00:10:58.790 Verify: Yes 00:10:58.790 00:10:58.790 Running for 1 seconds... 00:10:58.790 00:10:58.790 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.790 ------------------------------------------------------------------------------------ 00:10:58.790 0,0 265792/s 1038 MiB/s 0 0 00:10:58.790 ==================================================================================== 00:10:58.790 Total 265792/s 1038 MiB/s 0 0' 00:10:58.790 10:36:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:58.790 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:58.790 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:58.790 10:36:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:58.790 10:36:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.790 10:36:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.790 10:36:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.790 10:36:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.790 10:36:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.790 10:36:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.790 10:36:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.790 10:36:25 -- accel/accel.sh@42 -- # jq -r . 
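For the dualcast pass above, each 4096-byte source buffer is duplicated into two destination buffers; the MiB/s column in the table is simply transfers/s times the 4096-byte transfer size, counting the source side once. A quick check of the logged core-0 row:

  # 265792 transfers/s * 4096 B per transfer, expressed in MiB/s
  awk 'BEGIN { printf "%.0f MiB/s\n", 265792 * 4096 / 1048576 }'   # ~= 1038, matching the table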
00:10:58.790 [2024-07-24 10:36:25.311983] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:10:58.790 [2024-07-24 10:36:25.312248] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118406 ] 00:10:58.790 [2024-07-24 10:36:25.461472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.049 [2024-07-24 10:36:25.564954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val= 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val= 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val=0x1 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val= 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val= 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val=dualcast 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val= 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val=software 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@23 -- # accel_module=software 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val=32 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val=32 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val=1 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 
10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val=Yes 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val= 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:10:59.049 10:36:25 -- accel/accel.sh@21 -- # val= 00:10:59.049 10:36:25 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # IFS=: 00:10:59.049 10:36:25 -- accel/accel.sh@20 -- # read -r var val 00:11:00.425 10:36:26 -- accel/accel.sh@21 -- # val= 00:11:00.425 10:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # IFS=: 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # read -r var val 00:11:00.425 10:36:26 -- accel/accel.sh@21 -- # val= 00:11:00.425 10:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # IFS=: 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # read -r var val 00:11:00.425 10:36:26 -- accel/accel.sh@21 -- # val= 00:11:00.425 10:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # IFS=: 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # read -r var val 00:11:00.425 10:36:26 -- accel/accel.sh@21 -- # val= 00:11:00.425 10:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # IFS=: 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # read -r var val 00:11:00.425 10:36:26 -- accel/accel.sh@21 -- # val= 00:11:00.425 10:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # IFS=: 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # read -r var val 00:11:00.425 10:36:26 -- accel/accel.sh@21 -- # val= 00:11:00.425 10:36:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # IFS=: 00:11:00.425 10:36:26 -- accel/accel.sh@20 -- # read -r var val 00:11:00.425 10:36:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:00.425 10:36:26 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:00.425 10:36:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.425 00:11:00.425 real 0m3.065s 00:11:00.425 user 0m2.589s 00:11:00.425 sys 0m0.293s 00:11:00.425 10:36:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.425 10:36:26 -- common/autotest_common.sh@10 -- # set +x 00:11:00.425 ************************************ 00:11:00.425 END TEST accel_dualcast 00:11:00.425 ************************************ 00:11:00.425 10:36:26 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:00.425 10:36:26 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:00.425 10:36:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:00.425 10:36:26 -- common/autotest_common.sh@10 -- # set +x 00:11:00.425 ************************************ 00:11:00.425 START TEST accel_compare 00:11:00.425 ************************************ 00:11:00.425 10:36:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:11:00.425 
10:36:26 -- accel/accel.sh@16 -- # local accel_opc 00:11:00.425 10:36:26 -- accel/accel.sh@17 -- # local accel_module 00:11:00.425 10:36:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:00.425 10:36:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:00.425 10:36:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.425 10:36:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.425 10:36:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.425 10:36:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.425 10:36:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.425 10:36:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.425 10:36:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.425 10:36:26 -- accel/accel.sh@42 -- # jq -r . 00:11:00.425 [2024-07-24 10:36:26.909097] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:00.425 [2024-07-24 10:36:26.909295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118449 ] 00:11:00.425 [2024-07-24 10:36:27.054063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.684 [2024-07-24 10:36:27.132572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.058 10:36:28 -- accel/accel.sh@18 -- # out=' 00:11:02.058 SPDK Configuration: 00:11:02.058 Core mask: 0x1 00:11:02.058 00:11:02.058 Accel Perf Configuration: 00:11:02.058 Workload Type: compare 00:11:02.058 Transfer size: 4096 bytes 00:11:02.058 Vector count 1 00:11:02.058 Module: software 00:11:02.058 Queue depth: 32 00:11:02.058 Allocate depth: 32 00:11:02.058 # threads/core: 1 00:11:02.058 Run time: 1 seconds 00:11:02.058 Verify: Yes 00:11:02.058 00:11:02.058 Running for 1 seconds... 00:11:02.058 00:11:02.058 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:02.058 ------------------------------------------------------------------------------------ 00:11:02.058 0,0 372416/s 1454 MiB/s 0 0 00:11:02.058 ==================================================================================== 00:11:02.058 Total 372416/s 1454 MiB/s 0 0' 00:11:02.058 10:36:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:02.058 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.058 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.058 10:36:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:02.058 10:36:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.058 10:36:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.058 10:36:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.058 10:36:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.058 10:36:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.058 10:36:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.058 10:36:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.058 10:36:28 -- accel/accel.sh@42 -- # jq -r . 00:11:02.058 [2024-07-24 10:36:28.417469] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
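The compare run above is the fastest software pass in this section (372416 transfers/s, ~1454 MiB/s), and its result rows use the same layout as every other workload here: core,thread, transfers, bandwidth, failed, miscompares. When comparing several of these tables it can help to scrape the rows; a small illustrative example over the core-0 row quoted from the table above, with the surrounding console timestamps stripped (row text and field meanings come from this log, the awk itself is only a sketch):

  row='0,0 372416/s 1454 MiB/s 0 0'
  awk '{ gsub("/s", "", $2)
         printf "transfers/s=%s MiB/s=%s failed=%s miscompares=%s\n", $2, $3, $5, $6 }' <<< "$row"
  # -> transfers/s=372416 MiB/s=1454 failed=0 miscompares=0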
00:11:02.058 [2024-07-24 10:36:28.417722] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118472 ] 00:11:02.058 [2024-07-24 10:36:28.567208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.058 [2024-07-24 10:36:28.666086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val= 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val= 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val=0x1 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val= 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val= 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val=compare 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val= 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val=software 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@23 -- # accel_module=software 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val=32 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val=32 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val=1 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val='1 seconds' 
00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val=Yes 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val= 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:02.316 10:36:28 -- accel/accel.sh@21 -- # val= 00:11:02.316 10:36:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # IFS=: 00:11:02.316 10:36:28 -- accel/accel.sh@20 -- # read -r var val 00:11:03.689 10:36:29 -- accel/accel.sh@21 -- # val= 00:11:03.689 10:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # IFS=: 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # read -r var val 00:11:03.689 10:36:29 -- accel/accel.sh@21 -- # val= 00:11:03.689 10:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # IFS=: 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # read -r var val 00:11:03.689 10:36:29 -- accel/accel.sh@21 -- # val= 00:11:03.689 10:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # IFS=: 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # read -r var val 00:11:03.689 10:36:29 -- accel/accel.sh@21 -- # val= 00:11:03.689 10:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # IFS=: 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # read -r var val 00:11:03.689 10:36:29 -- accel/accel.sh@21 -- # val= 00:11:03.689 10:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # IFS=: 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # read -r var val 00:11:03.689 10:36:29 -- accel/accel.sh@21 -- # val= 00:11:03.689 10:36:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # IFS=: 00:11:03.689 10:36:29 -- accel/accel.sh@20 -- # read -r var val 00:11:03.689 10:36:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:03.689 10:36:29 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:03.689 10:36:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:03.689 00:11:03.689 real 0m3.065s 00:11:03.689 user 0m2.610s 00:11:03.689 sys 0m0.289s 00:11:03.689 10:36:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:03.689 10:36:29 -- common/autotest_common.sh@10 -- # set +x 00:11:03.689 ************************************ 00:11:03.689 END TEST accel_compare 00:11:03.689 ************************************ 00:11:03.689 10:36:29 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:03.689 10:36:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:03.689 10:36:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:03.689 10:36:29 -- common/autotest_common.sh@10 -- # set +x 00:11:03.689 ************************************ 00:11:03.689 START TEST accel_xor 00:11:03.689 ************************************ 00:11:03.689 10:36:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:11:03.689 10:36:29 -- accel/accel.sh@16 -- # local accel_opc 00:11:03.689 10:36:29 -- accel/accel.sh@17 -- # local accel_module 00:11:03.689 
10:36:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:03.689 10:36:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:03.689 10:36:29 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.689 10:36:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.689 10:36:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.689 10:36:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.689 10:36:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.689 10:36:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.689 10:36:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.689 10:36:30 -- accel/accel.sh@42 -- # jq -r . 00:11:03.689 [2024-07-24 10:36:30.027224] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:03.689 [2024-07-24 10:36:30.027526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118517 ] 00:11:03.689 [2024-07-24 10:36:30.175654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.689 [2024-07-24 10:36:30.248562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.062 10:36:31 -- accel/accel.sh@18 -- # out=' 00:11:05.062 SPDK Configuration: 00:11:05.062 Core mask: 0x1 00:11:05.062 00:11:05.062 Accel Perf Configuration: 00:11:05.062 Workload Type: xor 00:11:05.062 Source buffers: 2 00:11:05.062 Transfer size: 4096 bytes 00:11:05.062 Vector count 1 00:11:05.062 Module: software 00:11:05.062 Queue depth: 32 00:11:05.062 Allocate depth: 32 00:11:05.062 # threads/core: 1 00:11:05.062 Run time: 1 seconds 00:11:05.062 Verify: Yes 00:11:05.062 00:11:05.062 Running for 1 seconds... 00:11:05.062 00:11:05.062 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:05.062 ------------------------------------------------------------------------------------ 00:11:05.062 0,0 210304/s 821 MiB/s 0 0 00:11:05.062 ==================================================================================== 00:11:05.062 Total 210304/s 821 MiB/s 0 0' 00:11:05.062 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.062 10:36:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:05.062 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.062 10:36:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.062 10:36:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:05.062 10:36:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.062 10:36:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.062 10:36:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.062 10:36:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.062 10:36:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.062 10:36:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.062 10:36:31 -- accel/accel.sh@42 -- # jq -r . 00:11:05.062 [2024-07-24 10:36:31.516301] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
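The xor pass above uses the default two source buffers ("Source buffers: 2", 210304 transfers/s); the software module XORs the sources into the destination buffer and, because of -y, re-verifies the output. As a toy illustration of the operation itself, on four bytes rather than the 4096-byte vectors the perf tool drives:

  # Byte-wise XOR of two tiny "buffers"; the real workload does this across 4 KiB vectors.
  a=(0xde 0xad 0xbe 0xef)
  b=(0x01 0x02 0x03 0x04)
  for i in "${!a[@]}"; do printf '%02x' $(( a[i] ^ b[i] )); done; echo   # dfafbdeb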
00:11:05.062 [2024-07-24 10:36:31.516553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118540 ] 00:11:05.062 [2024-07-24 10:36:31.665005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.396 [2024-07-24 10:36:31.756952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val= 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val= 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=0x1 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val= 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val= 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=xor 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=2 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val= 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=software 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@23 -- # accel_module=software 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=32 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=32 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=1 00:11:05.396 10:36:31 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val=Yes 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val= 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:05.396 10:36:31 -- accel/accel.sh@21 -- # val= 00:11:05.396 10:36:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # IFS=: 00:11:05.396 10:36:31 -- accel/accel.sh@20 -- # read -r var val 00:11:06.771 10:36:33 -- accel/accel.sh@21 -- # val= 00:11:06.771 10:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # IFS=: 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # read -r var val 00:11:06.771 10:36:33 -- accel/accel.sh@21 -- # val= 00:11:06.771 10:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # IFS=: 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # read -r var val 00:11:06.771 10:36:33 -- accel/accel.sh@21 -- # val= 00:11:06.771 10:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # IFS=: 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # read -r var val 00:11:06.771 10:36:33 -- accel/accel.sh@21 -- # val= 00:11:06.771 10:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # IFS=: 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # read -r var val 00:11:06.771 10:36:33 -- accel/accel.sh@21 -- # val= 00:11:06.771 10:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # IFS=: 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # read -r var val 00:11:06.771 10:36:33 -- accel/accel.sh@21 -- # val= 00:11:06.771 10:36:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # IFS=: 00:11:06.771 10:36:33 -- accel/accel.sh@20 -- # read -r var val 00:11:06.771 10:36:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:06.771 10:36:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:06.771 10:36:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:06.771 00:11:06.771 real 0m3.033s 00:11:06.771 user 0m2.596s 00:11:06.771 sys 0m0.273s 00:11:06.771 10:36:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.771 10:36:33 -- common/autotest_common.sh@10 -- # set +x 00:11:06.771 ************************************ 00:11:06.771 END TEST accel_xor 00:11:06.771 ************************************ 00:11:06.771 10:36:33 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:06.771 10:36:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:06.771 10:36:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.771 10:36:33 -- common/autotest_common.sh@10 -- # set +x 00:11:06.771 ************************************ 00:11:06.771 START TEST accel_xor 00:11:06.771 ************************************ 00:11:06.771 
10:36:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:06.771 10:36:33 -- accel/accel.sh@16 -- # local accel_opc 00:11:06.771 10:36:33 -- accel/accel.sh@17 -- # local accel_module 00:11:06.771 10:36:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:06.771 10:36:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:06.771 10:36:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.771 10:36:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.771 10:36:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.771 10:36:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.771 10:36:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.771 10:36:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.771 10:36:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.771 10:36:33 -- accel/accel.sh@42 -- # jq -r . 00:11:06.771 [2024-07-24 10:36:33.109496] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:06.771 [2024-07-24 10:36:33.109767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118585 ] 00:11:06.771 [2024-07-24 10:36:33.265236] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.771 [2024-07-24 10:36:33.339099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.147 10:36:34 -- accel/accel.sh@18 -- # out=' 00:11:08.147 SPDK Configuration: 00:11:08.147 Core mask: 0x1 00:11:08.147 00:11:08.147 Accel Perf Configuration: 00:11:08.147 Workload Type: xor 00:11:08.147 Source buffers: 3 00:11:08.147 Transfer size: 4096 bytes 00:11:08.147 Vector count 1 00:11:08.147 Module: software 00:11:08.147 Queue depth: 32 00:11:08.147 Allocate depth: 32 00:11:08.147 # threads/core: 1 00:11:08.147 Run time: 1 seconds 00:11:08.147 Verify: Yes 00:11:08.147 00:11:08.147 Running for 1 seconds... 00:11:08.147 00:11:08.147 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.147 ------------------------------------------------------------------------------------ 00:11:08.147 0,0 186304/s 727 MiB/s 0 0 00:11:08.147 ==================================================================================== 00:11:08.147 Total 186304/s 727 MiB/s 0 0' 00:11:08.147 10:36:34 -- accel/accel.sh@20 -- # IFS=: 00:11:08.147 10:36:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:08.147 10:36:34 -- accel/accel.sh@20 -- # read -r var val 00:11:08.147 10:36:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:08.147 10:36:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.147 10:36:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.147 10:36:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.147 10:36:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.147 10:36:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.147 10:36:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.147 10:36:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.147 10:36:34 -- accel/accel.sh@42 -- # jq -r . 00:11:08.147 [2024-07-24 10:36:34.691545] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
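This second xor test adds -x 3, and the configuration above duly reports three source buffers instead of two; in the software path the extra source costs about 11% of throughput in this run (186304 vs 210304 transfers/s). The two invocations differ only in that flag (paths as in the trace; the harness additionally supplies -c /dev/fd/62):

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  "$ACCEL_PERF" -t 1 -w xor -y         # "Source buffers: 2" -> 210304 transfers/s above
  "$ACCEL_PERF" -t 1 -w xor -y -x 3    # "Source buffers: 3" -> 186304 transfers/s above
  # Relative cost of the third source buffer in this particular run:
  awk 'BEGIN { printf "%.1f%% drop\n", (1 - 186304/210304) * 100 }'   # ~= 11.4%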
00:11:08.147 [2024-07-24 10:36:34.691911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118606 ] 00:11:08.405 [2024-07-24 10:36:34.833911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.405 [2024-07-24 10:36:34.964936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val= 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val= 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=0x1 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val= 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val= 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=xor 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=3 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val= 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=software 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=32 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=32 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=1 00:11:08.405 10:36:35 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val=Yes 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val= 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:08.405 10:36:35 -- accel/accel.sh@21 -- # val= 00:11:08.405 10:36:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # IFS=: 00:11:08.405 10:36:35 -- accel/accel.sh@20 -- # read -r var val 00:11:09.779 10:36:36 -- accel/accel.sh@21 -- # val= 00:11:09.779 10:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # IFS=: 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # read -r var val 00:11:09.779 10:36:36 -- accel/accel.sh@21 -- # val= 00:11:09.779 10:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # IFS=: 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # read -r var val 00:11:09.779 10:36:36 -- accel/accel.sh@21 -- # val= 00:11:09.779 10:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # IFS=: 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # read -r var val 00:11:09.779 10:36:36 -- accel/accel.sh@21 -- # val= 00:11:09.779 10:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # IFS=: 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # read -r var val 00:11:09.779 10:36:36 -- accel/accel.sh@21 -- # val= 00:11:09.779 10:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # IFS=: 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # read -r var val 00:11:09.779 10:36:36 -- accel/accel.sh@21 -- # val= 00:11:09.779 10:36:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # IFS=: 00:11:09.779 10:36:36 -- accel/accel.sh@20 -- # read -r var val 00:11:09.779 10:36:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:09.779 10:36:36 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:09.779 10:36:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:09.779 00:11:09.779 real 0m3.273s 00:11:09.779 user 0m2.758s 00:11:09.779 sys 0m0.338s 00:11:09.779 10:36:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.779 10:36:36 -- common/autotest_common.sh@10 -- # set +x 00:11:09.779 ************************************ 00:11:09.779 END TEST accel_xor 00:11:09.779 ************************************ 00:11:09.779 10:36:36 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:09.779 10:36:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:09.779 10:36:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.779 10:36:36 -- common/autotest_common.sh@10 -- # set +x 00:11:09.779 ************************************ 00:11:09.779 START TEST accel_dif_verify 00:11:09.779 ************************************ 
00:11:09.779 10:36:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:09.779 10:36:36 -- accel/accel.sh@16 -- # local accel_opc 00:11:09.779 10:36:36 -- accel/accel.sh@17 -- # local accel_module 00:11:09.779 10:36:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:09.779 10:36:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:09.779 10:36:36 -- accel/accel.sh@12 -- # build_accel_config 00:11:09.779 10:36:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:09.779 10:36:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.779 10:36:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.779 10:36:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:09.779 10:36:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:09.779 10:36:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:09.779 10:36:36 -- accel/accel.sh@42 -- # jq -r . 00:11:09.779 [2024-07-24 10:36:36.438747] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:09.779 [2024-07-24 10:36:36.440089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118653 ] 00:11:10.037 [2024-07-24 10:36:36.599017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.295 [2024-07-24 10:36:36.721517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.668 10:36:38 -- accel/accel.sh@18 -- # out=' 00:11:11.668 SPDK Configuration: 00:11:11.668 Core mask: 0x1 00:11:11.668 00:11:11.668 Accel Perf Configuration: 00:11:11.668 Workload Type: dif_verify 00:11:11.668 Vector size: 4096 bytes 00:11:11.668 Transfer size: 4096 bytes 00:11:11.668 Block size: 512 bytes 00:11:11.668 Metadata size: 8 bytes 00:11:11.668 Vector count 1 00:11:11.668 Module: software 00:11:11.668 Queue depth: 32 00:11:11.668 Allocate depth: 32 00:11:11.668 # threads/core: 1 00:11:11.668 Run time: 1 seconds 00:11:11.668 Verify: No 00:11:11.668 00:11:11.668 Running for 1 seconds... 00:11:11.668 00:11:11.668 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:11.668 ------------------------------------------------------------------------------------ 00:11:11.668 0,0 90912/s 360 MiB/s 0 0 00:11:11.668 ==================================================================================== 00:11:11.668 Total 90912/s 355 MiB/s 0 0' 00:11:11.668 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.668 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.668 10:36:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:11.668 10:36:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:11.668 10:36:38 -- accel/accel.sh@12 -- # build_accel_config 00:11:11.668 10:36:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:11.668 10:36:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.668 10:36:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.668 10:36:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:11.668 10:36:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:11.668 10:36:38 -- accel/accel.sh@41 -- # local IFS=, 00:11:11.668 10:36:38 -- accel/accel.sh@42 -- # jq -r . 00:11:11.668 [2024-07-24 10:36:38.089724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
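The dif_verify pass above is the only workload in this section with block geometry in its configuration (512-byte blocks carrying 8 bytes of DIF metadata each), and it runs with Verify: No — the run_test line drops -y, presumably because the workload itself is the verification step. A sketch of rerunning it with the same flags and checking the Total row (path as in the trace; the harness also passes -c /dev/fd/62):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify
  # Total row check: 90912 transfers/s over 4096-byte transfers
  awk 'BEGIN { printf "%.0f MiB/s\n", 90912 * 4096 / 1048576 }'   # ~= 355 MiB/s, as in the Total line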
00:11:11.668 [2024-07-24 10:36:38.090169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118683 ] 00:11:11.668 [2024-07-24 10:36:38.231048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.926 [2024-07-24 10:36:38.355353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val= 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val= 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val=0x1 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val= 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val= 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val=dif_verify 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val= 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- accel/accel.sh@21 -- # val=software 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.926 10:36:38 -- accel/accel.sh@23 -- # accel_module=software 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.926 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.926 10:36:38 -- 
accel/accel.sh@21 -- # val=32 00:11:11.926 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.927 10:36:38 -- accel/accel.sh@21 -- # val=32 00:11:11.927 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.927 10:36:38 -- accel/accel.sh@21 -- # val=1 00:11:11.927 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.927 10:36:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:11.927 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.927 10:36:38 -- accel/accel.sh@21 -- # val=No 00:11:11.927 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.927 10:36:38 -- accel/accel.sh@21 -- # val= 00:11:11.927 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:11.927 10:36:38 -- accel/accel.sh@21 -- # val= 00:11:11.927 10:36:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # IFS=: 00:11:11.927 10:36:38 -- accel/accel.sh@20 -- # read -r var val 00:11:13.300 10:36:39 -- accel/accel.sh@21 -- # val= 00:11:13.300 10:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # IFS=: 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # read -r var val 00:11:13.300 10:36:39 -- accel/accel.sh@21 -- # val= 00:11:13.300 10:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # IFS=: 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # read -r var val 00:11:13.300 10:36:39 -- accel/accel.sh@21 -- # val= 00:11:13.300 10:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # IFS=: 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # read -r var val 00:11:13.300 10:36:39 -- accel/accel.sh@21 -- # val= 00:11:13.300 10:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # IFS=: 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # read -r var val 00:11:13.300 10:36:39 -- accel/accel.sh@21 -- # val= 00:11:13.300 10:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # IFS=: 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # read -r var val 00:11:13.300 10:36:39 -- accel/accel.sh@21 -- # val= 00:11:13.300 10:36:39 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # IFS=: 00:11:13.300 10:36:39 -- accel/accel.sh@20 -- # read -r var val 00:11:13.300 10:36:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:13.300 10:36:39 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:13.300 10:36:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:13.300 00:11:13.300 real 0m3.302s 00:11:13.300 user 0m2.780s 00:11:13.300 sys 0m0.353s 00:11:13.300 10:36:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.300 10:36:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.300 ************************************ 00:11:13.300 END 
TEST accel_dif_verify 00:11:13.300 ************************************ 00:11:13.300 10:36:39 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:13.300 10:36:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:13.300 10:36:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.300 10:36:39 -- common/autotest_common.sh@10 -- # set +x 00:11:13.300 ************************************ 00:11:13.300 START TEST accel_dif_generate 00:11:13.300 ************************************ 00:11:13.300 10:36:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:13.300 10:36:39 -- accel/accel.sh@16 -- # local accel_opc 00:11:13.300 10:36:39 -- accel/accel.sh@17 -- # local accel_module 00:11:13.300 10:36:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:13.300 10:36:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:13.300 10:36:39 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.300 10:36:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.300 10:36:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.300 10:36:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.300 10:36:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.300 10:36:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.300 10:36:39 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.300 10:36:39 -- accel/accel.sh@42 -- # jq -r . 00:11:13.300 [2024-07-24 10:36:39.793473] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:13.300 [2024-07-24 10:36:39.794334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118721 ] 00:11:13.300 [2024-07-24 10:36:39.943009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.559 [2024-07-24 10:36:40.071551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.934 10:36:41 -- accel/accel.sh@18 -- # out=' 00:11:14.934 SPDK Configuration: 00:11:14.934 Core mask: 0x1 00:11:14.934 00:11:14.934 Accel Perf Configuration: 00:11:14.934 Workload Type: dif_generate 00:11:14.934 Vector size: 4096 bytes 00:11:14.934 Transfer size: 4096 bytes 00:11:14.934 Block size: 512 bytes 00:11:14.934 Metadata size: 8 bytes 00:11:14.934 Vector count 1 00:11:14.934 Module: software 00:11:14.934 Queue depth: 32 00:11:14.934 Allocate depth: 32 00:11:14.934 # threads/core: 1 00:11:14.934 Run time: 1 seconds 00:11:14.934 Verify: No 00:11:14.934 00:11:14.934 Running for 1 seconds... 
00:11:14.934 00:11:14.934 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:14.934 ------------------------------------------------------------------------------------ 00:11:14.934 0,0 110336/s 437 MiB/s 0 0 00:11:14.934 ==================================================================================== 00:11:14.934 Total 110336/s 431 MiB/s 0 0' 00:11:14.934 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:14.934 10:36:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:14.934 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:14.934 10:36:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:14.934 10:36:41 -- accel/accel.sh@12 -- # build_accel_config 00:11:14.934 10:36:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:14.934 10:36:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.934 10:36:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.934 10:36:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:14.934 10:36:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:14.934 10:36:41 -- accel/accel.sh@41 -- # local IFS=, 00:11:14.934 10:36:41 -- accel/accel.sh@42 -- # jq -r . 00:11:14.934 [2024-07-24 10:36:41.437124] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:14.934 [2024-07-24 10:36:41.437689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118751 ] 00:11:14.934 [2024-07-24 10:36:41.585378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.192 [2024-07-24 10:36:41.734645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.192 10:36:41 -- accel/accel.sh@21 -- # val= 00:11:15.192 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.192 10:36:41 -- accel/accel.sh@21 -- # val= 00:11:15.192 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.192 10:36:41 -- accel/accel.sh@21 -- # val=0x1 00:11:15.192 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.192 10:36:41 -- accel/accel.sh@21 -- # val= 00:11:15.192 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.192 10:36:41 -- accel/accel.sh@21 -- # val= 00:11:15.192 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.192 10:36:41 -- accel/accel.sh@21 -- # val=dif_generate 00:11:15.192 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.192 10:36:41 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.192 10:36:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:15.192 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.192 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 
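The repeated accel.sh@20-22 records throughout these runs are the script reading accel_perf's echoed configuration back in: each field is split on ':' and dispatched through a case statement. A minimal sketch of the loop shape those records imply (the per-field handling inside the case is assumed, not shown in the trace):

# reconstruction of the parsing loop suggested by the sh@20/21/22 trace lines
while IFS=: read -r var val; do   # split each echoed line on ':'
    case "$var" in                # dispatch on the field name
        *) : ;;                   # assumed: per-field handling omitted here
    esac
done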
00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val= 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val=software 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@23 -- # accel_module=software 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val=32 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val=32 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val=1 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val=No 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val= 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:15.193 10:36:41 -- accel/accel.sh@21 -- # val= 00:11:15.193 10:36:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # IFS=: 00:11:15.193 10:36:41 -- accel/accel.sh@20 -- # read -r var val 00:11:16.570 10:36:43 -- accel/accel.sh@21 -- # val= 00:11:16.570 10:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # IFS=: 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # read -r var val 00:11:16.570 10:36:43 -- accel/accel.sh@21 -- # val= 00:11:16.570 10:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # IFS=: 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # read -r var val 00:11:16.570 10:36:43 -- accel/accel.sh@21 -- # val= 00:11:16.570 10:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.570 10:36:43 -- 
accel/accel.sh@20 -- # IFS=: 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # read -r var val 00:11:16.570 10:36:43 -- accel/accel.sh@21 -- # val= 00:11:16.570 10:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # IFS=: 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # read -r var val 00:11:16.570 10:36:43 -- accel/accel.sh@21 -- # val= 00:11:16.570 10:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # IFS=: 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # read -r var val 00:11:16.570 10:36:43 -- accel/accel.sh@21 -- # val= 00:11:16.570 10:36:43 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # IFS=: 00:11:16.570 10:36:43 -- accel/accel.sh@20 -- # read -r var val 00:11:16.570 10:36:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:16.570 10:36:43 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:16.570 10:36:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:16.570 00:11:16.570 real 0m3.374s 00:11:16.570 user 0m2.846s 00:11:16.570 sys 0m0.360s 00:11:16.570 10:36:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.570 10:36:43 -- common/autotest_common.sh@10 -- # set +x 00:11:16.570 ************************************ 00:11:16.570 END TEST accel_dif_generate 00:11:16.570 ************************************ 00:11:16.570 10:36:43 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:16.570 10:36:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:16.570 10:36:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.570 10:36:43 -- common/autotest_common.sh@10 -- # set +x 00:11:16.570 ************************************ 00:11:16.570 START TEST accel_dif_generate_copy 00:11:16.570 ************************************ 00:11:16.570 10:36:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:16.570 10:36:43 -- accel/accel.sh@16 -- # local accel_opc 00:11:16.570 10:36:43 -- accel/accel.sh@17 -- # local accel_module 00:11:16.570 10:36:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:16.570 10:36:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:16.570 10:36:43 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.570 10:36:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.570 10:36:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.570 10:36:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.570 10:36:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.570 10:36:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.570 10:36:43 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.570 10:36:43 -- accel/accel.sh@42 -- # jq -r . 00:11:16.570 [2024-07-24 10:36:43.220504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
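The dif_generate_copy case opening here combines DIF generation with a copy in a single operation; note that, unlike dif_verify and dif_generate, the configuration echoed below reports no block size or metadata size fields for it. A sketch of the invocation, again using only flags visible in the trace:

# sketch of the dif_generate_copy run whose initialization log follows
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy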
00:11:16.570 [2024-07-24 10:36:43.221489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118796 ] 00:11:16.829 [2024-07-24 10:36:43.371082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.829 [2024-07-24 10:36:43.482654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.202 10:36:44 -- accel/accel.sh@18 -- # out=' 00:11:18.202 SPDK Configuration: 00:11:18.202 Core mask: 0x1 00:11:18.202 00:11:18.202 Accel Perf Configuration: 00:11:18.202 Workload Type: dif_generate_copy 00:11:18.202 Vector size: 4096 bytes 00:11:18.202 Transfer size: 4096 bytes 00:11:18.202 Vector count 1 00:11:18.202 Module: software 00:11:18.202 Queue depth: 32 00:11:18.202 Allocate depth: 32 00:11:18.202 # threads/core: 1 00:11:18.202 Run time: 1 seconds 00:11:18.202 Verify: No 00:11:18.202 00:11:18.202 Running for 1 seconds... 00:11:18.202 00:11:18.202 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:18.202 ------------------------------------------------------------------------------------ 00:11:18.202 0,0 77696/s 308 MiB/s 0 0 00:11:18.202 ==================================================================================== 00:11:18.202 Total 77696/s 303 MiB/s 0 0' 00:11:18.202 10:36:44 -- accel/accel.sh@20 -- # IFS=: 00:11:18.202 10:36:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:18.202 10:36:44 -- accel/accel.sh@20 -- # read -r var val 00:11:18.202 10:36:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:18.202 10:36:44 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.202 10:36:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.202 10:36:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.202 10:36:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.202 10:36:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.202 10:36:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.202 10:36:44 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.202 10:36:44 -- accel/accel.sh@42 -- # jq -r . 00:11:18.202 [2024-07-24 10:36:44.767943] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:18.202 [2024-07-24 10:36:44.768351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118819 ] 00:11:18.461 [2024-07-24 10:36:44.915454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.461 [2024-07-24 10:36:45.026566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val= 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val= 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val=0x1 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val= 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val= 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val= 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val=software 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@23 -- # accel_module=software 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val=32 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val=32 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 
-- # val=1 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val=No 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val= 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:18.461 10:36:45 -- accel/accel.sh@21 -- # val= 00:11:18.461 10:36:45 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # IFS=: 00:11:18.461 10:36:45 -- accel/accel.sh@20 -- # read -r var val 00:11:19.837 10:36:46 -- accel/accel.sh@21 -- # val= 00:11:19.837 10:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # IFS=: 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # read -r var val 00:11:19.837 10:36:46 -- accel/accel.sh@21 -- # val= 00:11:19.837 10:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # IFS=: 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # read -r var val 00:11:19.837 10:36:46 -- accel/accel.sh@21 -- # val= 00:11:19.837 10:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # IFS=: 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # read -r var val 00:11:19.837 10:36:46 -- accel/accel.sh@21 -- # val= 00:11:19.837 10:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # IFS=: 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # read -r var val 00:11:19.837 10:36:46 -- accel/accel.sh@21 -- # val= 00:11:19.837 10:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # IFS=: 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # read -r var val 00:11:19.837 10:36:46 -- accel/accel.sh@21 -- # val= 00:11:19.837 10:36:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # IFS=: 00:11:19.837 10:36:46 -- accel/accel.sh@20 -- # read -r var val 00:11:19.837 10:36:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:19.837 10:36:46 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:19.837 10:36:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.837 00:11:19.837 real 0m3.117s 00:11:19.837 user 0m2.642s 00:11:19.837 sys 0m0.313s 00:11:19.837 10:36:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.837 10:36:46 -- common/autotest_common.sh@10 -- # set +x 00:11:19.837 ************************************ 00:11:19.837 END TEST accel_dif_generate_copy 00:11:19.837 ************************************ 00:11:19.837 10:36:46 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:19.837 10:36:46 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.837 10:36:46 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:19.837 10:36:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.837 10:36:46 -- 
common/autotest_common.sh@10 -- # set +x 00:11:19.837 ************************************ 00:11:19.837 START TEST accel_comp 00:11:19.837 ************************************ 00:11:19.837 10:36:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.837 10:36:46 -- accel/accel.sh@16 -- # local accel_opc 00:11:19.837 10:36:46 -- accel/accel.sh@17 -- # local accel_module 00:11:19.837 10:36:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.837 10:36:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.837 10:36:46 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.837 10:36:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.837 10:36:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.837 10:36:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.837 10:36:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.837 10:36:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.837 10:36:46 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.837 10:36:46 -- accel/accel.sh@42 -- # jq -r . 00:11:19.837 [2024-07-24 10:36:46.390644] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:19.837 [2024-07-24 10:36:46.391464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118865 ] 00:11:20.095 [2024-07-24 10:36:46.542839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.095 [2024-07-24 10:36:46.628651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.471 10:36:47 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:21.471 00:11:21.471 SPDK Configuration: 00:11:21.471 Core mask: 0x1 00:11:21.471 00:11:21.471 Accel Perf Configuration: 00:11:21.471 Workload Type: compress 00:11:21.471 Transfer size: 4096 bytes 00:11:21.471 Vector count 1 00:11:21.471 Module: software 00:11:21.471 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.471 Queue depth: 32 00:11:21.471 Allocate depth: 32 00:11:21.471 # threads/core: 1 00:11:21.471 Run time: 1 seconds 00:11:21.471 Verify: No 00:11:21.471 00:11:21.471 Running for 1 seconds... 
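The compress case differs from the DIF cases in that it needs an input file: run_test passes -l /home/vagrant/spdk_repo/spdk/test/accel/bib, and accel_perf reports it back as "File Name" in the configuration below. A sketch of the invocation with the flags from the trace:

# sketch of the compress run reported below; -l names the input file that
# accel_perf compresses for the duration of the 1-second run
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib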
00:11:21.471 00:11:21.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:21.471 ------------------------------------------------------------------------------------ 00:11:21.471 0,0 45792/s 190 MiB/s 0 0 00:11:21.471 ==================================================================================== 00:11:21.471 Total 45792/s 178 MiB/s 0 0' 00:11:21.471 10:36:47 -- accel/accel.sh@20 -- # IFS=: 00:11:21.471 10:36:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.471 10:36:47 -- accel/accel.sh@20 -- # read -r var val 00:11:21.471 10:36:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.471 10:36:47 -- accel/accel.sh@12 -- # build_accel_config 00:11:21.471 10:36:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:21.471 10:36:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.471 10:36:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.471 10:36:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:21.471 10:36:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:21.471 10:36:47 -- accel/accel.sh@41 -- # local IFS=, 00:11:21.471 10:36:47 -- accel/accel.sh@42 -- # jq -r . 00:11:21.471 [2024-07-24 10:36:47.915613] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:21.472 [2024-07-24 10:36:47.916625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118890 ] 00:11:21.472 [2024-07-24 10:36:48.069688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.730 [2024-07-24 10:36:48.166899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=0x1 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=compress 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 
00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=software 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@23 -- # accel_module=software 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=32 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=32 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=1 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val=No 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:21.730 10:36:48 -- accel/accel.sh@21 -- # val= 00:11:21.730 10:36:48 -- accel/accel.sh@22 -- # case "$var" in 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # IFS=: 00:11:21.730 10:36:48 -- accel/accel.sh@20 -- # read -r var val 00:11:23.106 10:36:49 -- accel/accel.sh@21 -- # val= 00:11:23.106 10:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # IFS=: 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # read -r var val 00:11:23.106 10:36:49 -- accel/accel.sh@21 -- # val= 00:11:23.106 10:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # IFS=: 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # read -r var val 00:11:23.106 10:36:49 -- accel/accel.sh@21 -- # val= 00:11:23.106 10:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # IFS=: 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # read -r var val 00:11:23.106 10:36:49 -- accel/accel.sh@21 -- # val= 
00:11:23.106 10:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # IFS=: 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # read -r var val 00:11:23.106 10:36:49 -- accel/accel.sh@21 -- # val= 00:11:23.106 10:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # IFS=: 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # read -r var val 00:11:23.106 10:36:49 -- accel/accel.sh@21 -- # val= 00:11:23.106 10:36:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # IFS=: 00:11:23.106 10:36:49 -- accel/accel.sh@20 -- # read -r var val 00:11:23.106 10:36:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:23.106 10:36:49 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:23.106 10:36:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:23.106 00:11:23.106 real 0m3.074s 00:11:23.106 user 0m2.612s 00:11:23.106 sys 0m0.292s 00:11:23.106 10:36:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.106 10:36:49 -- common/autotest_common.sh@10 -- # set +x 00:11:23.106 ************************************ 00:11:23.106 END TEST accel_comp 00:11:23.106 ************************************ 00:11:23.106 10:36:49 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:23.106 10:36:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:23.106 10:36:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:23.106 10:36:49 -- common/autotest_common.sh@10 -- # set +x 00:11:23.106 ************************************ 00:11:23.106 START TEST accel_decomp 00:11:23.106 ************************************ 00:11:23.106 10:36:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:23.106 10:36:49 -- accel/accel.sh@16 -- # local accel_opc 00:11:23.106 10:36:49 -- accel/accel.sh@17 -- # local accel_module 00:11:23.106 10:36:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:23.106 10:36:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:23.106 10:36:49 -- accel/accel.sh@12 -- # build_accel_config 00:11:23.106 10:36:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:23.106 10:36:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.106 10:36:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.106 10:36:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:23.106 10:36:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:23.106 10:36:49 -- accel/accel.sh@41 -- # local IFS=, 00:11:23.106 10:36:49 -- accel/accel.sh@42 -- # jq -r . 00:11:23.106 [2024-07-24 10:36:49.517588] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:23.106 [2024-07-24 10:36:49.517857] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118931 ] 00:11:23.106 [2024-07-24 10:36:49.667255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.106 [2024-07-24 10:36:49.757687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.481 10:36:51 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:24.481 00:11:24.481 SPDK Configuration: 00:11:24.481 Core mask: 0x1 00:11:24.481 00:11:24.481 Accel Perf Configuration: 00:11:24.481 Workload Type: decompress 00:11:24.481 Transfer size: 4096 bytes 00:11:24.481 Vector count 1 00:11:24.481 Module: software 00:11:24.481 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:24.481 Queue depth: 32 00:11:24.481 Allocate depth: 32 00:11:24.481 # threads/core: 1 00:11:24.481 Run time: 1 seconds 00:11:24.481 Verify: Yes 00:11:24.481 00:11:24.481 Running for 1 seconds... 00:11:24.481 00:11:24.481 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:24.481 ------------------------------------------------------------------------------------ 00:11:24.481 0,0 57408/s 105 MiB/s 0 0 00:11:24.481 ==================================================================================== 00:11:24.481 Total 57408/s 224 MiB/s 0 0' 00:11:24.481 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.481 10:36:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:24.481 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.481 10:36:51 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.481 10:36:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:24.481 10:36:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.481 10:36:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.481 10:36:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.481 10:36:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.481 10:36:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.481 10:36:51 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.481 10:36:51 -- accel/accel.sh@42 -- # jq -r . 00:11:24.481 [2024-07-24 10:36:51.035396] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
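The decompress cases add -y on top of the compress flags, and the echoed configuration flips Verify: from No to Yes accordingly: the decompressed output is checked against the original input. A sketch with the flags from the trace:

# sketch of the decompress run traced here; -y enables verification of the
# decompressed data against the bib input file
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y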
00:11:24.481 [2024-07-24 10:36:51.035682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118966 ] 00:11:24.739 [2024-07-24 10:36:51.184667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.739 [2024-07-24 10:36:51.280417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.739 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.739 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.739 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.739 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.739 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.739 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.739 10:36:51 -- accel/accel.sh@21 -- # val=0x1 00:11:24.739 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.739 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.739 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.739 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.739 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.739 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val=decompress 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val=software 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@23 -- # accel_module=software 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val=32 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- 
accel/accel.sh@21 -- # val=32 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val=1 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val=Yes 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:24.740 10:36:51 -- accel/accel.sh@21 -- # val= 00:11:24.740 10:36:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # IFS=: 00:11:24.740 10:36:51 -- accel/accel.sh@20 -- # read -r var val 00:11:26.115 10:36:52 -- accel/accel.sh@21 -- # val= 00:11:26.115 10:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.115 10:36:52 -- accel/accel.sh@20 -- # IFS=: 00:11:26.115 10:36:52 -- accel/accel.sh@20 -- # read -r var val 00:11:26.115 10:36:52 -- accel/accel.sh@21 -- # val= 00:11:26.115 10:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.115 10:36:52 -- accel/accel.sh@20 -- # IFS=: 00:11:26.115 10:36:52 -- accel/accel.sh@20 -- # read -r var val 00:11:26.115 10:36:52 -- accel/accel.sh@21 -- # val= 00:11:26.115 10:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # IFS=: 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # read -r var val 00:11:26.116 10:36:52 -- accel/accel.sh@21 -- # val= 00:11:26.116 10:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # IFS=: 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # read -r var val 00:11:26.116 10:36:52 -- accel/accel.sh@21 -- # val= 00:11:26.116 10:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # IFS=: 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # read -r var val 00:11:26.116 10:36:52 -- accel/accel.sh@21 -- # val= 00:11:26.116 10:36:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # IFS=: 00:11:26.116 10:36:52 -- accel/accel.sh@20 -- # read -r var val 00:11:26.116 10:36:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:26.116 10:36:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:26.116 10:36:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:26.116 00:11:26.116 real 0m3.065s 00:11:26.116 user 0m2.623s 00:11:26.116 sys 0m0.273s 00:11:26.116 10:36:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.116 10:36:52 -- common/autotest_common.sh@10 -- # set +x 00:11:26.116 ************************************ 00:11:26.116 END TEST accel_decomp 00:11:26.116 ************************************ 00:11:26.116 10:36:52 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
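The accel_decmop_full variant launched here adds -o 0 to the decompress flags. -o sets the transfer size, and 0 appears to let the test take it from the input rather than the 4096-byte default, which matches the 111250-byte transfers reported in the configuration below. A sketch with the flags from the trace:

# sketch of the full-buffer decompress run; -o 0 yields the 111250-byte
# transfer size reported below instead of the 4096 bytes used so far
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0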
00:11:26.116 10:36:52 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:26.116 10:36:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.116 10:36:52 -- common/autotest_common.sh@10 -- # set +x 00:11:26.116 ************************************ 00:11:26.116 START TEST accel_decmop_full 00:11:26.116 ************************************ 00:11:26.116 10:36:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:26.116 10:36:52 -- accel/accel.sh@16 -- # local accel_opc 00:11:26.116 10:36:52 -- accel/accel.sh@17 -- # local accel_module 00:11:26.116 10:36:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:26.116 10:36:52 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.116 10:36:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:26.116 10:36:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.116 10:36:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.116 10:36:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.116 10:36:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.116 10:36:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.116 10:36:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.116 10:36:52 -- accel/accel.sh@42 -- # jq -r . 00:11:26.116 [2024-07-24 10:36:52.623943] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:26.116 [2024-07-24 10:36:52.624155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118999 ] 00:11:26.116 [2024-07-24 10:36:52.766449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.411 [2024-07-24 10:36:52.861998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.810 10:36:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:27.810 00:11:27.810 SPDK Configuration: 00:11:27.810 Core mask: 0x1 00:11:27.810 00:11:27.810 Accel Perf Configuration: 00:11:27.810 Workload Type: decompress 00:11:27.810 Transfer size: 111250 bytes 00:11:27.810 Vector count 1 00:11:27.810 Module: software 00:11:27.810 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:27.810 Queue depth: 32 00:11:27.810 Allocate depth: 32 00:11:27.810 # threads/core: 1 00:11:27.810 Run time: 1 seconds 00:11:27.810 Verify: Yes 00:11:27.810 00:11:27.810 Running for 1 seconds... 
00:11:27.810 00:11:27.810 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:27.810 ------------------------------------------------------------------------------------ 00:11:27.810 0,0 4288/s 177 MiB/s 0 0 00:11:27.810 ==================================================================================== 00:11:27.810 Total 4288/s 454 MiB/s 0 0' 00:11:27.810 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:27.810 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:27.810 10:36:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:27.810 10:36:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:27.810 10:36:54 -- accel/accel.sh@12 -- # build_accel_config 00:11:27.810 10:36:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:27.810 10:36:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.810 10:36:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.810 10:36:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:27.810 10:36:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:27.810 10:36:54 -- accel/accel.sh@41 -- # local IFS=, 00:11:27.810 10:36:54 -- accel/accel.sh@42 -- # jq -r . 00:11:27.810 [2024-07-24 10:36:54.157190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:27.810 [2024-07-24 10:36:54.157974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119034 ] 00:11:27.810 [2024-07-24 10:36:54.309646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.810 [2024-07-24 10:36:54.415676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=0x1 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=decompress 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:28.069 10:36:54 -- 
accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=software 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@23 -- # accel_module=software 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=32 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=32 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=1 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val=Yes 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:28.069 10:36:54 -- accel/accel.sh@21 -- # val= 00:11:28.069 10:36:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # IFS=: 00:11:28.069 10:36:54 -- accel/accel.sh@20 -- # read -r var val 00:11:29.445 10:36:55 -- accel/accel.sh@21 -- # val= 00:11:29.445 10:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # IFS=: 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # read -r var val 00:11:29.445 10:36:55 -- accel/accel.sh@21 -- # val= 00:11:29.445 10:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # IFS=: 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # read -r var val 00:11:29.445 10:36:55 -- accel/accel.sh@21 -- # val= 00:11:29.445 10:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # IFS=: 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # read -r var val 00:11:29.445 10:36:55 -- 
accel/accel.sh@21 -- # val= 00:11:29.445 10:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # IFS=: 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # read -r var val 00:11:29.445 10:36:55 -- accel/accel.sh@21 -- # val= 00:11:29.445 10:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # IFS=: 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # read -r var val 00:11:29.445 10:36:55 -- accel/accel.sh@21 -- # val= 00:11:29.445 10:36:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # IFS=: 00:11:29.445 10:36:55 -- accel/accel.sh@20 -- # read -r var val 00:11:29.445 10:36:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:29.445 10:36:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:29.445 10:36:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:29.445 00:11:29.445 real 0m3.099s 00:11:29.445 user 0m2.590s 00:11:29.445 sys 0m0.328s 00:11:29.445 10:36:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.445 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:11:29.445 ************************************ 00:11:29.445 END TEST accel_decmop_full 00:11:29.445 ************************************ 00:11:29.445 10:36:55 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:29.445 10:36:55 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:29.445 10:36:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:29.445 10:36:55 -- common/autotest_common.sh@10 -- # set +x 00:11:29.445 ************************************ 00:11:29.445 START TEST accel_decomp_mcore 00:11:29.445 ************************************ 00:11:29.445 10:36:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:29.445 10:36:55 -- accel/accel.sh@16 -- # local accel_opc 00:11:29.445 10:36:55 -- accel/accel.sh@17 -- # local accel_module 00:11:29.445 10:36:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:29.445 10:36:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:29.445 10:36:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:29.445 10:36:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:29.445 10:36:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.445 10:36:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.445 10:36:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:29.445 10:36:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:29.445 10:36:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:29.445 10:36:55 -- accel/accel.sh@42 -- # jq -r . 00:11:29.445 [2024-07-24 10:36:55.766631] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
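The accel_decmop_full pass that just finished measures software decompression of the whole 111250-byte buffer on a single reactor (core mask 0x1). The command echoed in the trace can be replayed outside the harness; as a minimal sketch, assuming an SPDK tree with the example binaries built and the test/accel/bib input already generated by the earlier stages of accel.sh (relative paths stand in for the /home/vagrant/spdk_repo/spdk prefix used on this node):

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0

Judging from the configuration the tool prints back, -w selects the decompress workload, -t 1 bounds the run to one second, -y enables verification, and the runs that pass -o 0 report the full 111250-byte transfer instead of the 4096 bytes seen in the non-"full" runs; the extra -c /dev/fd/62 in the trace only feeds the JSON accel config assembled by build_accel_config.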
00:11:29.445 [2024-07-24 10:36:55.766841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119067 ] 00:11:29.445 [2024-07-24 10:36:55.931557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.445 [2024-07-24 10:36:56.023246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.445 [2024-07-24 10:36:56.023406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.445 [2024-07-24 10:36:56.024247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.445 [2024-07-24 10:36:56.024192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.817 10:36:57 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:30.817 00:11:30.817 SPDK Configuration: 00:11:30.817 Core mask: 0xf 00:11:30.817 00:11:30.817 Accel Perf Configuration: 00:11:30.817 Workload Type: decompress 00:11:30.817 Transfer size: 4096 bytes 00:11:30.817 Vector count 1 00:11:30.817 Module: software 00:11:30.817 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:30.817 Queue depth: 32 00:11:30.817 Allocate depth: 32 00:11:30.817 # threads/core: 1 00:11:30.817 Run time: 1 seconds 00:11:30.817 Verify: Yes 00:11:30.817 00:11:30.817 Running for 1 seconds... 00:11:30.817 00:11:30.817 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:30.817 ------------------------------------------------------------------------------------ 00:11:30.817 0,0 52832/s 97 MiB/s 0 0 00:11:30.817 3,0 50688/s 93 MiB/s 0 0 00:11:30.817 2,0 51776/s 95 MiB/s 0 0 00:11:30.817 1,0 47776/s 88 MiB/s 0 0 00:11:30.817 ==================================================================================== 00:11:30.817 Total 203072/s 793 MiB/s 0 0' 00:11:30.817 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:30.817 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:30.817 10:36:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:30.817 10:36:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:30.817 10:36:57 -- accel/accel.sh@12 -- # build_accel_config 00:11:30.817 10:36:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:30.817 10:36:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.817 10:36:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.817 10:36:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:30.817 10:36:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:30.817 10:36:57 -- accel/accel.sh@41 -- # local IFS=, 00:11:30.817 10:36:57 -- accel/accel.sh@42 -- # jq -r . 00:11:30.817 [2024-07-24 10:36:57.326355] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:30.817 [2024-07-24 10:36:57.326614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119106 ] 00:11:30.817 [2024-07-24 10:36:57.493009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.075 [2024-07-24 10:36:57.620194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.075 [2024-07-24 10:36:57.620280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.075 [2024-07-24 10:36:57.621076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.075 [2024-07-24 10:36:57.621119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=0xf 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=decompress 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=software 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 
00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=32 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=32 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=1 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val=Yes 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:31.075 10:36:57 -- accel/accel.sh@21 -- # val= 00:11:31.075 10:36:57 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # IFS=: 00:11:31.075 10:36:57 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- 
accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@21 -- # val= 00:11:32.448 10:36:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # IFS=: 00:11:32.448 10:36:58 -- accel/accel.sh@20 -- # read -r var val 00:11:32.448 10:36:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:32.448 10:36:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:32.448 10:36:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:32.448 00:11:32.448 real 0m3.169s 00:11:32.448 user 0m9.627s 00:11:32.448 sys 0m0.345s 00:11:32.448 10:36:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.448 ************************************ 00:11:32.448 END TEST accel_decomp_mcore 00:11:32.448 ************************************ 00:11:32.448 10:36:58 -- common/autotest_common.sh@10 -- # set +x 00:11:32.448 10:36:58 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:32.448 10:36:58 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:32.448 10:36:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:32.448 10:36:58 -- common/autotest_common.sh@10 -- # set +x 00:11:32.448 ************************************ 00:11:32.448 START TEST accel_decomp_full_mcore 00:11:32.448 ************************************ 00:11:32.448 10:36:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:32.448 10:36:58 -- accel/accel.sh@16 -- # local accel_opc 00:11:32.448 10:36:58 -- accel/accel.sh@17 -- # local accel_module 00:11:32.448 10:36:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:32.448 10:36:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:32.448 10:36:58 -- accel/accel.sh@12 -- # build_accel_config 00:11:32.448 10:36:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:32.448 10:36:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.448 10:36:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.448 10:36:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:32.448 10:36:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:32.448 10:36:58 -- accel/accel.sh@41 -- # local IFS=, 00:11:32.448 10:36:58 -- accel/accel.sh@42 -- # jq -r . 00:11:32.448 [2024-07-24 10:36:58.980934] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:32.448 [2024-07-24 10:36:58.981142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119143 ] 00:11:32.707 [2024-07-24 10:36:59.143184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.707 [2024-07-24 10:36:59.241472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.707 [2024-07-24 10:36:59.241569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.707 [2024-07-24 10:36:59.241710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.707 [2024-07-24 10:36:59.241895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.100 10:37:00 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:34.100 00:11:34.100 SPDK Configuration: 00:11:34.100 Core mask: 0xf 00:11:34.100 00:11:34.100 Accel Perf Configuration: 00:11:34.100 Workload Type: decompress 00:11:34.100 Transfer size: 111250 bytes 00:11:34.100 Vector count 1 00:11:34.100 Module: software 00:11:34.100 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:34.100 Queue depth: 32 00:11:34.100 Allocate depth: 32 00:11:34.100 # threads/core: 1 00:11:34.100 Run time: 1 seconds 00:11:34.100 Verify: Yes 00:11:34.100 00:11:34.100 Running for 1 seconds... 00:11:34.100 00:11:34.100 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:34.100 ------------------------------------------------------------------------------------ 00:11:34.100 0,0 4320/s 178 MiB/s 0 0 00:11:34.100 3,0 4224/s 174 MiB/s 0 0 00:11:34.100 2,0 4384/s 181 MiB/s 0 0 00:11:34.100 1,0 4416/s 182 MiB/s 0 0 00:11:34.100 ==================================================================================== 00:11:34.100 Total 17344/s 1840 MiB/s 0 0' 00:11:34.100 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.100 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.100 10:37:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:34.100 10:37:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:34.100 10:37:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:34.100 10:37:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:34.100 10:37:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:34.100 10:37:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:34.100 10:37:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:34.100 10:37:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:34.100 10:37:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:34.100 10:37:00 -- accel/accel.sh@42 -- # jq -r . 00:11:34.100 [2024-07-24 10:37:00.554612] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
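Relative to the single-core full-buffer run earlier, accel_decomp_full_mcore only adds -m 0xf, and the table above accordingly prints one row per reactor (cores 0 through 3) plus the aggregate: 4320 + 4224 + 4384 + 4416 = 17344 transfers/s. A sketch of the invocation, under the same assumptions as before:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0 -m 0xf

The second accel_perf process that starts below (spdk_pid119176) repeats the same workload with identical parameters; its EAL and reactor start-up messages follow.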
00:11:34.100 [2024-07-24 10:37:00.554979] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119176 ] 00:11:34.100 [2024-07-24 10:37:00.730299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.380 [2024-07-24 10:37:00.843819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.380 [2024-07-24 10:37:00.843966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.380 [2024-07-24 10:37:00.844772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.380 [2024-07-24 10:37:00.844813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=0xf 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=decompress 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=software 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@23 -- # accel_module=software 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 
00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=32 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=32 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=1 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val=Yes 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:34.380 10:37:00 -- accel/accel.sh@21 -- # val= 00:11:34.380 10:37:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # IFS=: 00:11:34.380 10:37:00 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- 
accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@21 -- # val= 00:11:35.769 10:37:02 -- accel/accel.sh@22 -- # case "$var" in 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # IFS=: 00:11:35.769 10:37:02 -- accel/accel.sh@20 -- # read -r var val 00:11:35.769 10:37:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:35.769 10:37:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:35.769 10:37:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:35.769 00:11:35.769 real 0m3.182s 00:11:35.769 user 0m9.725s 00:11:35.769 sys 0m0.338s 00:11:35.769 10:37:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.769 10:37:02 -- common/autotest_common.sh@10 -- # set +x 00:11:35.769 ************************************ 00:11:35.769 END TEST accel_decomp_full_mcore 00:11:35.769 ************************************ 00:11:35.769 10:37:02 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:35.769 10:37:02 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:35.769 10:37:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:35.769 10:37:02 -- common/autotest_common.sh@10 -- # set +x 00:11:35.769 ************************************ 00:11:35.769 START TEST accel_decomp_mthread 00:11:35.769 ************************************ 00:11:35.769 10:37:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:35.769 10:37:02 -- accel/accel.sh@16 -- # local accel_opc 00:11:35.769 10:37:02 -- accel/accel.sh@17 -- # local accel_module 00:11:35.769 10:37:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:35.769 10:37:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:35.769 10:37:02 -- accel/accel.sh@12 -- # build_accel_config 00:11:35.769 10:37:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:35.769 10:37:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:35.769 10:37:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:35.770 10:37:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:35.770 10:37:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:35.770 10:37:02 -- accel/accel.sh@41 -- # local IFS=, 00:11:35.770 10:37:02 -- accel/accel.sh@42 -- # jq -r . 00:11:35.770 [2024-07-24 10:37:02.213397] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:35.770 [2024-07-24 10:37:02.213768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119225 ] 00:11:35.770 [2024-07-24 10:37:02.355353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.770 [2024-07-24 10:37:02.446635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.142 10:37:03 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:37.142 00:11:37.142 SPDK Configuration: 00:11:37.142 Core mask: 0x1 00:11:37.142 00:11:37.142 Accel Perf Configuration: 00:11:37.142 Workload Type: decompress 00:11:37.142 Transfer size: 4096 bytes 00:11:37.142 Vector count 1 00:11:37.142 Module: software 00:11:37.142 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.142 Queue depth: 32 00:11:37.142 Allocate depth: 32 00:11:37.142 # threads/core: 2 00:11:37.142 Run time: 1 seconds 00:11:37.142 Verify: Yes 00:11:37.142 00:11:37.142 Running for 1 seconds... 00:11:37.142 00:11:37.142 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:37.142 ------------------------------------------------------------------------------------ 00:11:37.142 0,1 29696/s 54 MiB/s 0 0 00:11:37.142 0,0 29536/s 54 MiB/s 0 0 00:11:37.142 ==================================================================================== 00:11:37.142 Total 59232/s 231 MiB/s 0 0' 00:11:37.142 10:37:03 -- accel/accel.sh@20 -- # IFS=: 00:11:37.142 10:37:03 -- accel/accel.sh@20 -- # read -r var val 00:11:37.142 10:37:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:37.142 10:37:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:37.142 10:37:03 -- accel/accel.sh@12 -- # build_accel_config 00:11:37.142 10:37:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:37.142 10:37:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.142 10:37:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.142 10:37:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:37.142 10:37:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:37.142 10:37:03 -- accel/accel.sh@41 -- # local IFS=, 00:11:37.142 10:37:03 -- accel/accel.sh@42 -- # jq -r . 00:11:37.142 [2024-07-24 10:37:03.728547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
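accel_decomp_mthread swaps the wider core mask for -T 2, which the echoed configuration reports as "# threads/core: 2": core 0 now carries two rows, 0,0 and 0,1, whose rates sum to the 59232/s total. Under the same assumptions, the invocation sketch is:

  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2

Because no -o is given here, the transfer size stays at the 4096 bytes shown above, so this is the threaded counterpart of the plain (non-full) decompress case.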
00:11:37.142 [2024-07-24 10:37:03.728893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119247 ] 00:11:37.398 [2024-07-24 10:37:03.871267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.398 [2024-07-24 10:37:03.973835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.398 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.398 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.398 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.398 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.398 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.398 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.398 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.398 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val=0x1 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val=decompress 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val=software 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@23 -- # accel_module=software 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val=32 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- 
accel/accel.sh@21 -- # val=32 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val=2 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val=Yes 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:37.399 10:37:04 -- accel/accel.sh@21 -- # val= 00:11:37.399 10:37:04 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # IFS=: 00:11:37.399 10:37:04 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@21 -- # val= 00:11:38.789 10:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # IFS=: 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@21 -- # val= 00:11:38.789 10:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # IFS=: 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@21 -- # val= 00:11:38.789 10:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # IFS=: 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@21 -- # val= 00:11:38.789 10:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # IFS=: 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@21 -- # val= 00:11:38.789 10:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # IFS=: 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@21 -- # val= 00:11:38.789 10:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # IFS=: 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@21 -- # val= 00:11:38.789 10:37:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # IFS=: 00:11:38.789 10:37:05 -- accel/accel.sh@20 -- # read -r var val 00:11:38.789 10:37:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:38.789 10:37:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:38.789 10:37:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:38.789 00:11:38.789 real 0m3.066s 00:11:38.789 user 0m2.617s 00:11:38.789 sys 0m0.278s 00:11:38.789 10:37:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.789 10:37:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.789 ************************************ 00:11:38.789 END 
TEST accel_decomp_mthread 00:11:38.789 ************************************ 00:11:38.789 10:37:05 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:38.789 10:37:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:38.789 10:37:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.789 10:37:05 -- common/autotest_common.sh@10 -- # set +x 00:11:38.789 ************************************ 00:11:38.789 START TEST accel_deomp_full_mthread 00:11:38.789 ************************************ 00:11:38.789 10:37:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:38.789 10:37:05 -- accel/accel.sh@16 -- # local accel_opc 00:11:38.789 10:37:05 -- accel/accel.sh@17 -- # local accel_module 00:11:38.789 10:37:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:38.790 10:37:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:38.790 10:37:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:38.790 10:37:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:38.790 10:37:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:38.790 10:37:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:38.790 10:37:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:38.790 10:37:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:38.790 10:37:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:38.790 10:37:05 -- accel/accel.sh@42 -- # jq -r . 00:11:38.790 [2024-07-24 10:37:05.337706] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:38.790 [2024-07-24 10:37:05.338144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119292 ] 00:11:39.047 [2024-07-24 10:37:05.486019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.047 [2024-07-24 10:37:05.558874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.418 10:37:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:40.418 00:11:40.418 SPDK Configuration: 00:11:40.418 Core mask: 0x1 00:11:40.418 00:11:40.418 Accel Perf Configuration: 00:11:40.418 Workload Type: decompress 00:11:40.418 Transfer size: 111250 bytes 00:11:40.418 Vector count 1 00:11:40.418 Module: software 00:11:40.418 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:40.418 Queue depth: 32 00:11:40.418 Allocate depth: 32 00:11:40.418 # threads/core: 2 00:11:40.418 Run time: 1 seconds 00:11:40.418 Verify: Yes 00:11:40.418 00:11:40.418 Running for 1 seconds... 
00:11:40.418 00:11:40.418 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:40.418 ------------------------------------------------------------------------------------ 00:11:40.418 0,1 2144/s 88 MiB/s 0 0 00:11:40.418 0,0 2144/s 88 MiB/s 0 0 00:11:40.418 ==================================================================================== 00:11:40.418 Total 4288/s 454 MiB/s 0 0' 00:11:40.418 10:37:06 -- accel/accel.sh@20 -- # IFS=: 00:11:40.418 10:37:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:40.418 10:37:06 -- accel/accel.sh@20 -- # read -r var val 00:11:40.418 10:37:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:40.418 10:37:06 -- accel/accel.sh@12 -- # build_accel_config 00:11:40.418 10:37:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:40.418 10:37:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:40.418 10:37:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:40.418 10:37:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:40.418 10:37:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:40.418 10:37:06 -- accel/accel.sh@41 -- # local IFS=, 00:11:40.418 10:37:06 -- accel/accel.sh@42 -- # jq -r . 00:11:40.418 [2024-07-24 10:37:06.883610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:40.418 [2024-07-24 10:37:06.884026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119315 ] 00:11:40.418 [2024-07-24 10:37:07.032831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.674 [2024-07-24 10:37:07.128281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=0x1 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=decompress 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=software 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@23 -- # accel_module=software 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=32 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=32 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=2 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val=Yes 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:40.674 10:37:07 -- accel/accel.sh@21 -- # val= 00:11:40.674 10:37:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # IFS=: 00:11:40.674 10:37:07 -- accel/accel.sh@20 -- # read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@21 -- # val= 00:11:42.044 10:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # IFS=: 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@21 -- # val= 00:11:42.044 10:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # IFS=: 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@21 -- # val= 00:11:42.044 10:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # IFS=: 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # 
read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@21 -- # val= 00:11:42.044 10:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # IFS=: 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@21 -- # val= 00:11:42.044 10:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # IFS=: 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@21 -- # val= 00:11:42.044 10:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # IFS=: 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@21 -- # val= 00:11:42.044 10:37:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # IFS=: 00:11:42.044 10:37:08 -- accel/accel.sh@20 -- # read -r var val 00:11:42.044 10:37:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:42.044 10:37:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:42.044 10:37:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:42.044 00:11:42.044 real 0m3.129s 00:11:42.044 user 0m2.635s 00:11:42.044 sys 0m0.322s 00:11:42.044 10:37:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.044 10:37:08 -- common/autotest_common.sh@10 -- # set +x 00:11:42.044 ************************************ 00:11:42.044 END TEST accel_deomp_full_mthread 00:11:42.044 ************************************ 00:11:42.044 10:37:08 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:42.044 10:37:08 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:42.044 10:37:08 -- accel/accel.sh@129 -- # build_accel_config 00:11:42.044 10:37:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:42.044 10:37:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:42.044 10:37:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:42.044 10:37:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:42.044 10:37:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:42.044 10:37:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:42.044 10:37:08 -- accel/accel.sh@41 -- # local IFS=, 00:11:42.044 10:37:08 -- accel/accel.sh@42 -- # jq -r . 00:11:42.044 10:37:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.044 10:37:08 -- common/autotest_common.sh@10 -- # set +x 00:11:42.044 ************************************ 00:11:42.044 START TEST accel_dif_functional_tests 00:11:42.044 ************************************ 00:11:42.044 10:37:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:42.044 [2024-07-24 10:37:08.574731] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:42.044 [2024-07-24 10:37:08.575487] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119364 ] 00:11:42.304 [2024-07-24 10:37:08.749954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.304 [2024-07-24 10:37:08.844089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.304 [2024-07-24 10:37:08.844228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.304 [2024-07-24 10:37:08.844233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.304 00:11:42.304 00:11:42.304 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.304 http://cunit.sourceforge.net/ 00:11:42.304 00:11:42.304 00:11:42.304 Suite: accel_dif 00:11:42.304 Test: verify: DIF generated, GUARD check ...passed 00:11:42.304 Test: verify: DIF generated, APPTAG check ...passed 00:11:42.304 Test: verify: DIF generated, REFTAG check ...passed 00:11:42.304 Test: verify: DIF not generated, GUARD check ...[2024-07-24 10:37:08.939716] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:42.304 [2024-07-24 10:37:08.940003] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:42.304 passed 00:11:42.304 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 10:37:08.940425] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:42.304 [2024-07-24 10:37:08.940646] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:42.304 passed 00:11:42.304 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 10:37:08.941031] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:42.304 [2024-07-24 10:37:08.941263] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:42.304 passed 00:11:42.304 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:42.304 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 10:37:08.941921] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:42.304 passed 00:11:42.304 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:42.304 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:42.304 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:42.304 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 10:37:08.943040] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:42.304 passed 00:11:42.304 Test: generate copy: DIF generated, GUARD check ...passed 00:11:42.304 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:42.304 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:42.304 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:42.304 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:42.304 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:42.304 Test: generate copy: iovecs-len validate ...[2024-07-24 10:37:08.944898] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:42.304 passed 00:11:42.304 Test: generate copy: buffer alignment validate ...passed 00:11:42.304 00:11:42.304 Run Summary: Type Total Ran Passed Failed Inactive 00:11:42.304 suites 1 1 n/a 0 0 00:11:42.304 tests 20 20 20 0 0 00:11:42.304 asserts 204 204 204 0 n/a 00:11:42.304 00:11:42.304 Elapsed time = 0.019 seconds 00:11:42.562 00:11:42.562 real 0m0.712s 00:11:42.562 user 0m0.859s 00:11:42.562 sys 0m0.250s 00:11:42.562 10:37:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.562 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:11:42.562 ************************************ 00:11:42.562 END TEST accel_dif_functional_tests 00:11:42.562 ************************************ 00:11:42.819 ************************************ 00:11:42.819 END TEST accel 00:11:42.819 ************************************ 00:11:42.819 00:11:42.819 real 1m8.208s 00:11:42.819 user 1m11.573s 00:11:42.819 sys 0m8.163s 00:11:42.819 10:37:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.819 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:11:42.819 10:37:09 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:42.819 10:37:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:42.819 10:37:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.819 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:11:42.819 ************************************ 00:11:42.819 START TEST accel_rpc 00:11:42.819 ************************************ 00:11:42.819 10:37:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:42.819 * Looking for test storage... 00:11:42.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:42.819 10:37:09 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:42.819 10:37:09 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=119441 00:11:42.819 10:37:09 -- accel/accel_rpc.sh@15 -- # waitforlisten 119441 00:11:42.819 10:37:09 -- common/autotest_common.sh@819 -- # '[' -z 119441 ']' 00:11:42.819 10:37:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.819 10:37:09 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:42.819 10:37:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:42.820 10:37:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.820 10:37:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:42.820 10:37:09 -- common/autotest_common.sh@10 -- # set +x 00:11:42.820 [2024-07-24 10:37:09.432748] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:42.820 [2024-07-24 10:37:09.433201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119441 ] 00:11:43.078 [2024-07-24 10:37:09.582694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.078 [2024-07-24 10:37:09.673967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:43.078 [2024-07-24 10:37:09.674532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.010 10:37:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:44.010 10:37:10 -- common/autotest_common.sh@852 -- # return 0 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:44.010 10:37:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:44.010 10:37:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.010 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.010 ************************************ 00:11:44.010 START TEST accel_assign_opcode 00:11:44.010 ************************************ 00:11:44.010 10:37:10 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:44.010 10:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.010 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.010 [2024-07-24 10:37:10.427617] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:44.010 10:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:44.010 10:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.010 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.010 [2024-07-24 10:37:10.435595] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:44.010 10:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:44.010 10:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.010 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.010 10:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:44.010 10:37:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:44.010 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.010 10:37:10 -- accel/accel_rpc.sh@42 -- # grep software 00:11:44.010 10:37:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:44.267 software 00:11:44.267 00:11:44.267 real 0m0.295s 00:11:44.268 user 0m0.053s 00:11:44.268 sys 0m0.010s 00:11:44.268 10:37:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.268 10:37:10 -- common/autotest_common.sh@10 -- # set +x 00:11:44.268 ************************************ 
00:11:44.268 END TEST accel_assign_opcode 00:11:44.268 ************************************ 00:11:44.268 10:37:10 -- accel/accel_rpc.sh@55 -- # killprocess 119441 00:11:44.268 10:37:10 -- common/autotest_common.sh@926 -- # '[' -z 119441 ']' 00:11:44.268 10:37:10 -- common/autotest_common.sh@930 -- # kill -0 119441 00:11:44.268 10:37:10 -- common/autotest_common.sh@931 -- # uname 00:11:44.268 10:37:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:44.268 10:37:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119441 00:11:44.268 10:37:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:44.268 10:37:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:44.268 10:37:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119441' 00:11:44.268 killing process with pid 119441 00:11:44.268 10:37:10 -- common/autotest_common.sh@945 -- # kill 119441 00:11:44.268 10:37:10 -- common/autotest_common.sh@950 -- # wait 119441 00:11:44.832 00:11:44.832 real 0m1.936s 00:11:44.832 user 0m2.018s 00:11:44.832 sys 0m0.479s 00:11:44.832 10:37:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:44.832 ************************************ 00:11:44.832 END TEST accel_rpc 00:11:44.832 10:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:44.832 ************************************ 00:11:44.832 10:37:11 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:44.832 10:37:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:44.832 10:37:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:44.832 10:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:44.832 ************************************ 00:11:44.832 START TEST app_cmdline 00:11:44.832 ************************************ 00:11:44.832 10:37:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:44.832 * Looking for test storage... 00:11:44.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:44.832 10:37:11 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:44.832 10:37:11 -- app/cmdline.sh@17 -- # spdk_tgt_pid=119532 00:11:44.832 10:37:11 -- app/cmdline.sh@18 -- # waitforlisten 119532 00:11:44.832 10:37:11 -- common/autotest_common.sh@819 -- # '[' -z 119532 ']' 00:11:44.832 10:37:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.832 10:37:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:44.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.832 10:37:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.832 10:37:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:44.832 10:37:11 -- common/autotest_common.sh@10 -- # set +x 00:11:44.832 10:37:11 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:44.832 [2024-07-24 10:37:11.417177] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
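The accel_rpc suite above reduces to a short JSON-RPC exchange against a target started with --wait-for-rpc. A minimal sketch of that flow, assuming it is run by hand from the SPDK repository root (RPC names and the module choice are the ones traced above):

  ./build/bin/spdk_tgt --wait-for-rpc &                      # hold subsystem init until RPC allows it
  ./scripts/rpc.py accel_assign_opc -o copy -m software      # pin the "copy" opcode to the software module
  ./scripts/rpc.py framework_start_init                      # finish initialization
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software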
00:11:44.832 [2024-07-24 10:37:11.417424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119532 ] 00:11:45.090 [2024-07-24 10:37:11.566687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.090 [2024-07-24 10:37:11.663397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:45.090 [2024-07-24 10:37:11.663754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.026 10:37:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:46.026 10:37:12 -- common/autotest_common.sh@852 -- # return 0 00:11:46.026 10:37:12 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:46.026 { 00:11:46.026 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:11:46.026 "fields": { 00:11:46.026 "major": 24, 00:11:46.026 "minor": 1, 00:11:46.026 "patch": 1, 00:11:46.026 "suffix": "-pre", 00:11:46.026 "commit": "dbef7efac" 00:11:46.026 } 00:11:46.026 } 00:11:46.026 10:37:12 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:46.026 10:37:12 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:46.026 10:37:12 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:46.026 10:37:12 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:46.026 10:37:12 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:46.026 10:37:12 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:46.026 10:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.026 10:37:12 -- app/cmdline.sh@26 -- # sort 00:11:46.026 10:37:12 -- common/autotest_common.sh@10 -- # set +x 00:11:46.026 10:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.026 10:37:12 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:46.026 10:37:12 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:46.026 10:37:12 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:46.026 10:37:12 -- common/autotest_common.sh@640 -- # local es=0 00:11:46.026 10:37:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:46.026 10:37:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.026 10:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.026 10:37:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.026 10:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.026 10:37:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.026 10:37:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:46.026 10:37:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.026 10:37:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:46.026 10:37:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:46.283 request: 00:11:46.283 { 00:11:46.283 "method": "env_dpdk_get_mem_stats", 00:11:46.283 "req_id": 1 00:11:46.283 } 00:11:46.283 Got 
JSON-RPC error response 00:11:46.283 response: 00:11:46.283 { 00:11:46.283 "code": -32601, 00:11:46.283 "message": "Method not found" 00:11:46.283 } 00:11:46.283 10:37:12 -- common/autotest_common.sh@643 -- # es=1 00:11:46.283 10:37:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:46.283 10:37:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:46.283 10:37:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:46.283 10:37:12 -- app/cmdline.sh@1 -- # killprocess 119532 00:11:46.283 10:37:12 -- common/autotest_common.sh@926 -- # '[' -z 119532 ']' 00:11:46.283 10:37:12 -- common/autotest_common.sh@930 -- # kill -0 119532 00:11:46.283 10:37:12 -- common/autotest_common.sh@931 -- # uname 00:11:46.283 10:37:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:46.283 10:37:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119532 00:11:46.283 10:37:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:46.283 10:37:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:46.283 10:37:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119532' 00:11:46.283 killing process with pid 119532 00:11:46.283 10:37:12 -- common/autotest_common.sh@945 -- # kill 119532 00:11:46.283 10:37:12 -- common/autotest_common.sh@950 -- # wait 119532 00:11:46.860 00:11:46.860 real 0m2.081s 00:11:46.860 user 0m2.515s 00:11:46.860 sys 0m0.526s 00:11:46.860 ************************************ 00:11:46.860 END TEST app_cmdline 00:11:46.860 ************************************ 00:11:46.860 10:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.860 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:11:46.860 10:37:13 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:46.860 10:37:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:46.860 10:37:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:46.860 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:11:46.860 ************************************ 00:11:46.860 START TEST version 00:11:46.860 ************************************ 00:11:46.860 10:37:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:46.860 * Looking for test storage... 
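The cmdline suite above starts the target with an RPC allow-list, so only the two whitelisted methods answer; anything else is rejected with JSON-RPC error -32601, which is exactly the failure the test expects. A rough manual equivalent from the repository root:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version          # allowed: returns the version object shown above
  ./scripts/rpc.py rpc_get_methods           # allowed: lists only the two permitted methods
  ./scripts/rpc.py env_dpdk_get_mem_stats    # not on the allow-list: fails with "Method not found" (-32601)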
00:11:46.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:46.860 10:37:13 -- app/version.sh@17 -- # get_header_version major 00:11:46.860 10:37:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:46.860 10:37:13 -- app/version.sh@14 -- # cut -f2 00:11:46.860 10:37:13 -- app/version.sh@14 -- # tr -d '"' 00:11:46.860 10:37:13 -- app/version.sh@17 -- # major=24 00:11:46.860 10:37:13 -- app/version.sh@18 -- # get_header_version minor 00:11:46.860 10:37:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:46.860 10:37:13 -- app/version.sh@14 -- # cut -f2 00:11:46.860 10:37:13 -- app/version.sh@14 -- # tr -d '"' 00:11:46.860 10:37:13 -- app/version.sh@18 -- # minor=1 00:11:46.860 10:37:13 -- app/version.sh@19 -- # get_header_version patch 00:11:46.860 10:37:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:46.860 10:37:13 -- app/version.sh@14 -- # cut -f2 00:11:46.860 10:37:13 -- app/version.sh@14 -- # tr -d '"' 00:11:46.860 10:37:13 -- app/version.sh@19 -- # patch=1 00:11:46.860 10:37:13 -- app/version.sh@20 -- # get_header_version suffix 00:11:46.860 10:37:13 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:46.860 10:37:13 -- app/version.sh@14 -- # cut -f2 00:11:46.860 10:37:13 -- app/version.sh@14 -- # tr -d '"' 00:11:46.860 10:37:13 -- app/version.sh@20 -- # suffix=-pre 00:11:46.860 10:37:13 -- app/version.sh@22 -- # version=24.1 00:11:46.860 10:37:13 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:46.860 10:37:13 -- app/version.sh@25 -- # version=24.1.1 00:11:46.860 10:37:13 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:46.860 10:37:13 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:46.860 10:37:13 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:47.124 10:37:13 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:47.124 10:37:13 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:47.124 00:11:47.124 real 0m0.129s 00:11:47.124 user 0m0.092s 00:11:47.124 sys 0m0.070s 00:11:47.124 10:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.124 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:11:47.124 ************************************ 00:11:47.124 END TEST version 00:11:47.124 ************************************ 00:11:47.124 10:37:13 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:11:47.124 10:37:13 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:47.124 10:37:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:47.124 10:37:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.125 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:11:47.125 ************************************ 00:11:47.125 START TEST blockdev_general 00:11:47.125 ************************************ 00:11:47.125 10:37:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:47.125 * Looking for test storage... 
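version.sh above derives the version entirely from include/spdk/version.h: each component is grepped out of its #define and stripped of quotes, then the result (24.1.1 with a -pre suffix, i.e. 24.1.1rc0) is checked against the installed Python package. A sketch of the helper being traced, assuming the component name is passed in uppercase:

  get_header_version() {   # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.1, then 24.1.1 since patch != 0
  python3 -c 'import spdk; print(spdk.__version__)'                   # 24.1.1rc0, matching the check above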
00:11:47.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:47.125 10:37:13 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:47.125 10:37:13 -- bdev/nbd_common.sh@6 -- # set -e 00:11:47.125 10:37:13 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:47.125 10:37:13 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:47.125 10:37:13 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:47.125 10:37:13 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:47.125 10:37:13 -- bdev/blockdev.sh@18 -- # : 00:11:47.125 10:37:13 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:47.125 10:37:13 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:47.125 10:37:13 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:47.125 10:37:13 -- bdev/blockdev.sh@672 -- # uname -s 00:11:47.125 10:37:13 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:47.125 10:37:13 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:47.125 10:37:13 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:47.125 10:37:13 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:47.125 10:37:13 -- bdev/blockdev.sh@682 -- # dek= 00:11:47.125 10:37:13 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:47.125 10:37:13 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:47.125 10:37:13 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:47.125 10:37:13 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:47.125 10:37:13 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:47.125 10:37:13 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:47.125 10:37:13 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=119692 00:11:47.125 10:37:13 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:47.125 10:37:13 -- bdev/blockdev.sh@47 -- # waitforlisten 119692 00:11:47.125 10:37:13 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:47.125 10:37:13 -- common/autotest_common.sh@819 -- # '[' -z 119692 ']' 00:11:47.125 10:37:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.125 10:37:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:47.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.125 10:37:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.125 10:37:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:47.125 10:37:13 -- common/autotest_common.sh@10 -- # set +x 00:11:47.125 [2024-07-24 10:37:13.727868] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:47.125 [2024-07-24 10:37:13.728093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119692 ] 00:11:47.382 [2024-07-24 10:37:13.870546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.382 [2024-07-24 10:37:13.957835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.382 [2024-07-24 10:37:13.958126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.314 10:37:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:48.314 10:37:14 -- common/autotest_common.sh@852 -- # return 0 00:11:48.314 10:37:14 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:48.314 10:37:14 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:48.314 10:37:14 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:48.314 10:37:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.314 10:37:14 -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 [2024-07-24 10:37:14.929112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:48.314 [2024-07-24 10:37:14.929271] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:48.314 00:11:48.314 [2024-07-24 10:37:14.937058] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:48.314 [2024-07-24 10:37:14.937131] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:48.314 00:11:48.314 Malloc0 00:11:48.314 Malloc1 00:11:48.314 Malloc2 00:11:48.571 Malloc3 00:11:48.571 Malloc4 00:11:48.571 Malloc5 00:11:48.571 Malloc6 00:11:48.571 Malloc7 00:11:48.571 Malloc8 00:11:48.571 Malloc9 00:11:48.571 [2024-07-24 10:37:15.120943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:48.571 [2024-07-24 10:37:15.121081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.572 [2024-07-24 10:37:15.121136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:48.572 [2024-07-24 10:37:15.121173] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.572 [2024-07-24 10:37:15.123975] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.572 [2024-07-24 10:37:15.124046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:48.572 TestPT 00:11:48.572 10:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.572 10:37:15 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:48.572 5000+0 records in 00:11:48.572 5000+0 records out 00:11:48.572 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0270134 s, 379 MB/s 00:11:48.572 10:37:15 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:48.572 10:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.572 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:11:48.572 AIO0 00:11:48.572 10:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.572 10:37:15 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:48.572 10:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.572 10:37:15 -- common/autotest_common.sh@10 -- # set +x 
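The bdev layer prepared above consists of ten Malloc bdevs (Malloc0-Malloc9), a passthru bdev (TestPT) claiming Malloc3, and an AIO bdev backed by a plain file; the file size follows from the dd parameters (2048 B * 5000 blocks = 10,240,000 B, the ~9.8 MiB reported). The two commands from the trace, runnable on their own against the same target:

  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000        # 10,240,000 bytes
  ./scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048   # filename, bdev name, block size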
00:11:48.572 10:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.572 10:37:15 -- bdev/blockdev.sh@738 -- # cat 00:11:48.572 10:37:15 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:48.572 10:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.572 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 10:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.831 10:37:15 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:48.831 10:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.831 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 10:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.831 10:37:15 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:48.831 10:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.831 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 10:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.831 10:37:15 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:48.831 10:37:15 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:48.831 10:37:15 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:48.831 10:37:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:48.831 10:37:15 -- common/autotest_common.sh@10 -- # set +x 00:11:48.831 10:37:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:48.831 10:37:15 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:48.831 10:37:15 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:48.832 10:37:15 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d6977d22-668c-41f5-8c35-859816c5244e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6977d22-668c-41f5-8c35-859816c5244e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8e8bfec5-33dd-5d89-b756-a1409cab9593"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8e8bfec5-33dd-5d89-b756-a1409cab9593",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "355a8041-5461-5912-b6df-32d913c52c35"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "355a8041-5461-5912-b6df-32d913c52c35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bcd665cc-6316-524c-b96d-fbc52b54a800"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bcd665cc-6316-524c-b96d-fbc52b54a800",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "9037478a-c430-5e4c-ba63-f05395d232f3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9037478a-c430-5e4c-ba63-f05395d232f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "76a608af-2336-56a7-9837-1406fa691046"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "76a608af-2336-56a7-9837-1406fa691046",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c2e465e8-673d-5387-ac90-ee64c349ead8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c2e465e8-673d-5387-ac90-ee64c349ead8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "ae68e692-c0be-5b7b-98cc-3046bd1a9e34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae68e692-c0be-5b7b-98cc-3046bd1a9e34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "1ad47b23-c6ab-54dc-b8a7-6c145a812a88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ad47b23-c6ab-54dc-b8a7-6c145a812a88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0ee9ad56-9d5f-5382-b818-35d031e09f2a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0ee9ad56-9d5f-5382-b818-35d031e09f2a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "3f030dc7-69fa-5fcd-aeb0-dfe80b67ab80"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3f030dc7-69fa-5fcd-aeb0-dfe80b67ab80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "303ac22a-fbd5-5cf4-a1ce-79ce49eaa531"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "303ac22a-fbd5-5cf4-a1ce-79ce49eaa531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "ce15e5b2-1808-46a4-8157-8c01c42c9619"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce15e5b2-1808-46a4-8157-8c01c42c9619",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce15e5b2-1808-46a4-8157-8c01c42c9619",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c4f7dd89-5baa-40a2-b68c-b2fb57c26d6d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "21839463-bb07-43b2-ab52-90c8cca2ff8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "88819bed-3f7d-4ab6-9ba7-d06dd67bf543"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "88819bed-3f7d-4ab6-9ba7-d06dd67bf543",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "88819bed-3f7d-4ab6-9ba7-d06dd67bf543",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f6646fac-391a-438a-9a8e-34f4813533d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ef786fc3-b9f3-4d50-8a55-4788aa0d523b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "867d3e01-1a46-45c2-aafc-6b430032e71e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "867d3e01-1a46-45c2-aafc-6b430032e71e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "867d3e01-1a46-45c2-aafc-6b430032e71e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d7657b0a-a85a-4670-acfe-25a5bc79ddb3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "1599c0ea-fd25-4887-886f-1f0c00194b1c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "f6a85a9b-b08c-4622-ace3-d970dd8aac9d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "f6a85a9b-b08c-4622-ace3-d970dd8aac9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:48.832 10:37:15 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:48.832 10:37:15 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:48.832 10:37:15 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:48.832 10:37:15 -- bdev/blockdev.sh@752 -- # killprocess 119692 00:11:48.832 10:37:15 -- common/autotest_common.sh@926 -- # '[' -z 119692 ']' 00:11:48.832 10:37:15 -- common/autotest_common.sh@930 -- # kill -0 119692 00:11:48.832 10:37:15 -- common/autotest_common.sh@931 -- # uname 00:11:48.832 10:37:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:48.832 10:37:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119692 00:11:48.832 10:37:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:48.832 killing process with pid 119692 00:11:48.832 10:37:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:48.832 10:37:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119692' 00:11:48.832 10:37:15 -- common/autotest_common.sh@945 -- # kill 119692 00:11:48.832 10:37:15 -- common/autotest_common.sh@950 -- # wait 119692 00:11:49.398 10:37:16 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:49.398 10:37:16 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:49.398 10:37:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:11:49.398 10:37:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:49.398 10:37:16 -- common/autotest_common.sh@10 -- # set +x 00:11:49.398 ************************************ 00:11:49.398 START TEST bdev_hello_world 00:11:49.398 ************************************ 00:11:49.398 10:37:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:49.656 [2024-07-24 10:37:16.103474] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:49.656 [2024-07-24 10:37:16.103759] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119751 ] 00:11:49.656 [2024-07-24 10:37:16.253205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.914 [2024-07-24 10:37:16.342301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.914 [2024-07-24 10:37:16.488473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:49.914 [2024-07-24 10:37:16.488594] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:49.914 [2024-07-24 10:37:16.496375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:49.914 [2024-07-24 10:37:16.496486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:49.914 [2024-07-24 10:37:16.504431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:49.914 [2024-07-24 10:37:16.504516] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:49.914 [2024-07-24 10:37:16.504563] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:50.171 [2024-07-24 10:37:16.603480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:50.171 [2024-07-24 10:37:16.603660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.171 [2024-07-24 10:37:16.603742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:50.171 [2024-07-24 10:37:16.603784] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.171 [2024-07-24 10:37:16.606528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.171 [2024-07-24 10:37:16.606609] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:50.171 [2024-07-24 10:37:16.776636] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:50.171 [2024-07-24 10:37:16.776729] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:50.171 [2024-07-24 10:37:16.776825] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:50.171 [2024-07-24 10:37:16.776908] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:50.171 [2024-07-24 10:37:16.777001] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:50.171 [2024-07-24 10:37:16.777039] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:50.171 [2024-07-24 10:37:16.777103] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
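The hello_world run above is a complete single-bdev round trip: load the bdev configuration from JSON, open Malloc0, grab an I/O channel, write a buffer, read it back, print the recovered string, and stop. The same invocation, stripped of the test wrapper and assuming the repository root as working directory:

  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0
  # expected NOTICE sequence, matching the trace above: started the application -> opening the bdev Malloc0
  # -> opening io channel -> bdev io write completed successfully -> "Read string from bdev : Hello World!" -> stopping app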
00:11:50.172 00:11:50.172 [2024-07-24 10:37:16.777157] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:50.737 00:11:50.737 real 0m1.125s 00:11:50.737 user 0m0.648s 00:11:50.737 sys 0m0.329s 00:11:50.737 10:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.737 10:37:17 -- common/autotest_common.sh@10 -- # set +x 00:11:50.737 ************************************ 00:11:50.737 END TEST bdev_hello_world 00:11:50.737 ************************************ 00:11:50.737 10:37:17 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:50.737 10:37:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:50.737 10:37:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.737 10:37:17 -- common/autotest_common.sh@10 -- # set +x 00:11:50.737 ************************************ 00:11:50.737 START TEST bdev_bounds 00:11:50.737 ************************************ 00:11:50.737 10:37:17 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:11:50.737 10:37:17 -- bdev/blockdev.sh@288 -- # bdevio_pid=119789 00:11:50.737 10:37:17 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:50.737 Process bdevio pid: 119789 00:11:50.737 10:37:17 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 119789' 00:11:50.737 10:37:17 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:50.737 10:37:17 -- bdev/blockdev.sh@291 -- # waitforlisten 119789 00:11:50.737 10:37:17 -- common/autotest_common.sh@819 -- # '[' -z 119789 ']' 00:11:50.737 10:37:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.737 10:37:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:50.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.737 10:37:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.737 10:37:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:50.737 10:37:17 -- common/autotest_common.sh@10 -- # set +x 00:11:50.737 [2024-07-24 10:37:17.275311] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:11:50.737 [2024-07-24 10:37:17.275568] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119789 ] 00:11:50.995 [2024-07-24 10:37:17.430439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.995 [2024-07-24 10:37:17.522842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.995 [2024-07-24 10:37:17.522994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.995 [2024-07-24 10:37:17.523339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.995 [2024-07-24 10:37:17.667721] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:50.995 [2024-07-24 10:37:17.667831] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:50.995 [2024-07-24 10:37:17.675592] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.254 [2024-07-24 10:37:17.675680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.255 [2024-07-24 10:37:17.683646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.255 [2024-07-24 10:37:17.683756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:51.255 [2024-07-24 10:37:17.683812] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:51.255 [2024-07-24 10:37:17.778724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.255 [2024-07-24 10:37:17.778839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.255 [2024-07-24 10:37:17.778946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:51.255 [2024-07-24 10:37:17.778987] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.255 [2024-07-24 10:37:17.781956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.255 [2024-07-24 10:37:17.782020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:51.821 10:37:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:51.821 10:37:18 -- common/autotest_common.sh@852 -- # return 0 00:11:51.821 10:37:18 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:51.821 I/O targets: 00:11:51.821 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:51.821 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:51.821 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:51.821 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:51.821 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:51.821 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:51.821 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:51.821 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:51.821 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:51.822 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:51.822 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:51.822 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:51.822 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:51.822 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:51.822 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:51.822 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
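The byte sizes in the I/O target list above follow directly from block_count * block_size; a quick arithmetic check:

  echo $(( 65536  * 512 ))    # 33554432 bytes = 32 MiB   (Malloc0, TestPT, raid1)
  echo $(( 32768  * 512 ))    # 16777216 bytes = 16 MiB   (Malloc1p0/Malloc1p1, split halves of Malloc1)
  echo $(( 8192   * 512 ))    # 4194304 bytes  = 4 MiB    (Malloc2p0..Malloc2p7, split eighths of Malloc2)
  echo $(( 131072 * 512 ))    # 67108864 bytes = 64 MiB   (raid0 and concat0, each built from two 32 MiB bases)
  echo $(( 5000   * 2048 ))   # 10240000 bytes ~ 9.8 MiB  (AIO0)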
00:11:51.822 00:11:51.822 00:11:51.822 CUnit - A unit testing framework for C - Version 2.1-3 00:11:51.822 http://cunit.sourceforge.net/ 00:11:51.822 00:11:51.822 00:11:51.822 Suite: bdevio tests on: AIO0 00:11:51.822 Test: blockdev write read block ...passed 00:11:51.822 Test: blockdev write zeroes read block ...passed 00:11:51.822 Test: blockdev write zeroes read no split ...passed 00:11:51.822 Test: blockdev write zeroes read split ...passed 00:11:51.822 Test: blockdev write zeroes read split partial ...passed 00:11:51.822 Test: blockdev reset ...passed 00:11:51.822 Test: blockdev write read 8 blocks ...passed 00:11:51.822 Test: blockdev write read size > 128k ...passed 00:11:51.822 Test: blockdev write read invalid size ...passed 00:11:51.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:51.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:51.822 Test: blockdev write read max offset ...passed 00:11:51.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:51.822 Test: blockdev writev readv 8 blocks ...passed 00:11:51.822 Test: blockdev writev readv 30 x 1block ...passed 00:11:51.822 Test: blockdev writev readv block ...passed 00:11:51.822 Test: blockdev writev readv size > 128k ...passed 00:11:51.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:51.822 Test: blockdev comparev and writev ...passed 00:11:51.822 Test: blockdev nvme passthru rw ...passed 00:11:51.822 Test: blockdev nvme passthru vendor specific ...passed 00:11:51.822 Test: blockdev nvme admin passthru ...passed 00:11:51.822 Test: blockdev copy ...passed 00:11:51.822 Suite: bdevio tests on: raid1 00:11:51.822 Test: blockdev write read block ...passed 00:11:51.822 Test: blockdev write zeroes read block ...passed 00:11:51.822 Test: blockdev write zeroes read no split ...passed 00:11:51.822 Test: blockdev write zeroes read split ...passed 00:11:51.822 Test: blockdev write zeroes read split partial ...passed 00:11:51.822 Test: blockdev reset ...passed 00:11:51.822 Test: blockdev write read 8 blocks ...passed 00:11:51.822 Test: blockdev write read size > 128k ...passed 00:11:51.822 Test: blockdev write read invalid size ...passed 00:11:51.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:51.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:51.822 Test: blockdev write read max offset ...passed 00:11:51.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:51.822 Test: blockdev writev readv 8 blocks ...passed 00:11:51.822 Test: blockdev writev readv 30 x 1block ...passed 00:11:51.822 Test: blockdev writev readv block ...passed 00:11:51.822 Test: blockdev writev readv size > 128k ...passed 00:11:51.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:51.822 Test: blockdev comparev and writev ...passed 00:11:51.822 Test: blockdev nvme passthru rw ...passed 00:11:51.822 Test: blockdev nvme passthru vendor specific ...passed 00:11:51.822 Test: blockdev nvme admin passthru ...passed 00:11:51.822 Test: blockdev copy ...passed 00:11:51.822 Suite: bdevio tests on: concat0 00:11:51.822 Test: blockdev write read block ...passed 00:11:51.822 Test: blockdev write zeroes read block ...passed 00:11:51.822 Test: blockdev write zeroes read no split ...passed 00:11:51.822 Test: blockdev write zeroes read split ...passed 00:11:51.822 Test: blockdev write zeroes read split partial ...passed 00:11:51.822 Test: blockdev reset 
...passed 00:11:51.822 Test: blockdev write read 8 blocks ...passed 00:11:51.822 Test: blockdev write read size > 128k ...passed 00:11:51.822 Test: blockdev write read invalid size ...passed 00:11:51.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:51.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:51.822 Test: blockdev write read max offset ...passed 00:11:51.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:51.822 Test: blockdev writev readv 8 blocks ...passed 00:11:51.822 Test: blockdev writev readv 30 x 1block ...passed 00:11:51.822 Test: blockdev writev readv block ...passed 00:11:51.822 Test: blockdev writev readv size > 128k ...passed 00:11:51.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:51.822 Test: blockdev comparev and writev ...passed 00:11:51.822 Test: blockdev nvme passthru rw ...passed 00:11:51.822 Test: blockdev nvme passthru vendor specific ...passed 00:11:51.822 Test: blockdev nvme admin passthru ...passed 00:11:51.822 Test: blockdev copy ...passed 00:11:51.822 Suite: bdevio tests on: raid0 00:11:51.822 Test: blockdev write read block ...passed 00:11:51.822 Test: blockdev write zeroes read block ...passed 00:11:51.822 Test: blockdev write zeroes read no split ...passed 00:11:51.822 Test: blockdev write zeroes read split ...passed 00:11:51.822 Test: blockdev write zeroes read split partial ...passed 00:11:51.822 Test: blockdev reset ...passed 00:11:51.822 Test: blockdev write read 8 blocks ...passed 00:11:51.822 Test: blockdev write read size > 128k ...passed 00:11:51.822 Test: blockdev write read invalid size ...passed 00:11:51.822 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:51.822 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:51.822 Test: blockdev write read max offset ...passed 00:11:51.822 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:51.822 Test: blockdev writev readv 8 blocks ...passed 00:11:51.822 Test: blockdev writev readv 30 x 1block ...passed 00:11:51.822 Test: blockdev writev readv block ...passed 00:11:51.822 Test: blockdev writev readv size > 128k ...passed 00:11:51.822 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:51.822 Test: blockdev comparev and writev ...passed 00:11:51.822 Test: blockdev nvme passthru rw ...passed 00:11:51.822 Test: blockdev nvme passthru vendor specific ...passed 00:11:51.822 Test: blockdev nvme admin passthru ...passed 00:11:51.822 Test: blockdev copy ...passed 00:11:51.822 Suite: bdevio tests on: TestPT 00:11:51.822 Test: blockdev write read block ...passed 00:11:51.822 Test: blockdev write zeroes read block ...passed 00:11:51.822 Test: blockdev write zeroes read no split ...passed 00:11:51.822 Test: blockdev write zeroes read split ...passed 00:11:51.822 Test: blockdev write zeroes read split partial ...passed 00:11:51.822 Test: blockdev reset ...passed 00:11:52.082 Test: blockdev write read 8 blocks ...passed 00:11:52.082 Test: blockdev write read size > 128k ...passed 00:11:52.082 Test: blockdev write read invalid size ...passed 00:11:52.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.082 Test: blockdev write read max offset ...passed 00:11:52.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.082 Test: blockdev writev readv 8 blocks 
...passed 00:11:52.082 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.082 Test: blockdev writev readv block ...passed 00:11:52.082 Test: blockdev writev readv size > 128k ...passed 00:11:52.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.082 Test: blockdev comparev and writev ...passed 00:11:52.082 Test: blockdev nvme passthru rw ...passed 00:11:52.082 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.082 Test: blockdev nvme admin passthru ...passed 00:11:52.082 Test: blockdev copy ...passed 00:11:52.082 Suite: bdevio tests on: Malloc2p7 00:11:52.082 Test: blockdev write read block ...passed 00:11:52.082 Test: blockdev write zeroes read block ...passed 00:11:52.082 Test: blockdev write zeroes read no split ...passed 00:11:52.082 Test: blockdev write zeroes read split ...passed 00:11:52.082 Test: blockdev write zeroes read split partial ...passed 00:11:52.082 Test: blockdev reset ...passed 00:11:52.082 Test: blockdev write read 8 blocks ...passed 00:11:52.082 Test: blockdev write read size > 128k ...passed 00:11:52.082 Test: blockdev write read invalid size ...passed 00:11:52.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.082 Test: blockdev write read max offset ...passed 00:11:52.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.082 Test: blockdev writev readv 8 blocks ...passed 00:11:52.082 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.082 Test: blockdev writev readv block ...passed 00:11:52.082 Test: blockdev writev readv size > 128k ...passed 00:11:52.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.082 Test: blockdev comparev and writev ...passed 00:11:52.082 Test: blockdev nvme passthru rw ...passed 00:11:52.082 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.082 Test: blockdev nvme admin passthru ...passed 00:11:52.082 Test: blockdev copy ...passed 00:11:52.082 Suite: bdevio tests on: Malloc2p6 00:11:52.082 Test: blockdev write read block ...passed 00:11:52.082 Test: blockdev write zeroes read block ...passed 00:11:52.082 Test: blockdev write zeroes read no split ...passed 00:11:52.082 Test: blockdev write zeroes read split ...passed 00:11:52.082 Test: blockdev write zeroes read split partial ...passed 00:11:52.082 Test: blockdev reset ...passed 00:11:52.082 Test: blockdev write read 8 blocks ...passed 00:11:52.082 Test: blockdev write read size > 128k ...passed 00:11:52.082 Test: blockdev write read invalid size ...passed 00:11:52.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.082 Test: blockdev write read max offset ...passed 00:11:52.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.082 Test: blockdev writev readv 8 blocks ...passed 00:11:52.082 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.082 Test: blockdev writev readv block ...passed 00:11:52.082 Test: blockdev writev readv size > 128k ...passed 00:11:52.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.082 Test: blockdev comparev and writev ...passed 00:11:52.082 Test: blockdev nvme passthru rw ...passed 00:11:52.082 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.082 Test: blockdev nvme admin passthru ...passed 00:11:52.082 Test: blockdev copy ...passed 
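(The same bdevio case list repeats for every target bdev below. As a rough illustration only, not the bdevio implementation itself, the core "write then read back" checks can be approximated by hand against any block device; /dev/nbd0, the 4096-byte block size, and the /tmp paths here are placeholder assumptions.)
  # write a known 8-block pattern, read it back with direct I/O, and compare
  dd if=/dev/urandom of=/tmp/pattern bs=4096 count=8
  dd if=/tmp/pattern of=/dev/nbd0 bs=4096 count=8 oflag=direct
  dd if=/dev/nbd0 of=/tmp/readback bs=4096 count=8 iflag=direct
  cmp /tmp/pattern /tmp/readback && echo 'read-back matches'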
00:11:52.082 Suite: bdevio tests on: Malloc2p5 00:11:52.082 Test: blockdev write read block ...passed 00:11:52.082 Test: blockdev write zeroes read block ...passed 00:11:52.082 Test: blockdev write zeroes read no split ...passed 00:11:52.082 Test: blockdev write zeroes read split ...passed 00:11:52.082 Test: blockdev write zeroes read split partial ...passed 00:11:52.082 Test: blockdev reset ...passed 00:11:52.082 Test: blockdev write read 8 blocks ...passed 00:11:52.082 Test: blockdev write read size > 128k ...passed 00:11:52.082 Test: blockdev write read invalid size ...passed 00:11:52.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.082 Test: blockdev write read max offset ...passed 00:11:52.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.082 Test: blockdev writev readv 8 blocks ...passed 00:11:52.082 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.082 Test: blockdev writev readv block ...passed 00:11:52.082 Test: blockdev writev readv size > 128k ...passed 00:11:52.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.082 Test: blockdev comparev and writev ...passed 00:11:52.082 Test: blockdev nvme passthru rw ...passed 00:11:52.082 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.082 Test: blockdev nvme admin passthru ...passed 00:11:52.082 Test: blockdev copy ...passed 00:11:52.082 Suite: bdevio tests on: Malloc2p4 00:11:52.082 Test: blockdev write read block ...passed 00:11:52.082 Test: blockdev write zeroes read block ...passed 00:11:52.082 Test: blockdev write zeroes read no split ...passed 00:11:52.082 Test: blockdev write zeroes read split ...passed 00:11:52.082 Test: blockdev write zeroes read split partial ...passed 00:11:52.082 Test: blockdev reset ...passed 00:11:52.082 Test: blockdev write read 8 blocks ...passed 00:11:52.082 Test: blockdev write read size > 128k ...passed 00:11:52.082 Test: blockdev write read invalid size ...passed 00:11:52.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.082 Test: blockdev write read max offset ...passed 00:11:52.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.082 Test: blockdev writev readv 8 blocks ...passed 00:11:52.082 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.082 Test: blockdev writev readv block ...passed 00:11:52.082 Test: blockdev writev readv size > 128k ...passed 00:11:52.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.082 Test: blockdev comparev and writev ...passed 00:11:52.082 Test: blockdev nvme passthru rw ...passed 00:11:52.082 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.082 Test: blockdev nvme admin passthru ...passed 00:11:52.082 Test: blockdev copy ...passed 00:11:52.082 Suite: bdevio tests on: Malloc2p3 00:11:52.082 Test: blockdev write read block ...passed 00:11:52.082 Test: blockdev write zeroes read block ...passed 00:11:52.082 Test: blockdev write zeroes read no split ...passed 00:11:52.082 Test: blockdev write zeroes read split ...passed 00:11:52.082 Test: blockdev write zeroes read split partial ...passed 00:11:52.082 Test: blockdev reset ...passed 00:11:52.082 Test: blockdev write read 8 blocks ...passed 00:11:52.082 Test: blockdev write read size > 128k ...passed 00:11:52.082 Test: 
blockdev write read invalid size ...passed 00:11:52.082 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.082 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.082 Test: blockdev write read max offset ...passed 00:11:52.082 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.082 Test: blockdev writev readv 8 blocks ...passed 00:11:52.082 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.082 Test: blockdev writev readv block ...passed 00:11:52.082 Test: blockdev writev readv size > 128k ...passed 00:11:52.082 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.082 Test: blockdev comparev and writev ...passed 00:11:52.082 Test: blockdev nvme passthru rw ...passed 00:11:52.082 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.082 Test: blockdev nvme admin passthru ...passed 00:11:52.082 Test: blockdev copy ...passed 00:11:52.082 Suite: bdevio tests on: Malloc2p2 00:11:52.082 Test: blockdev write read block ...passed 00:11:52.082 Test: blockdev write zeroes read block ...passed 00:11:52.082 Test: blockdev write zeroes read no split ...passed 00:11:52.083 Test: blockdev write zeroes read split ...passed 00:11:52.083 Test: blockdev write zeroes read split partial ...passed 00:11:52.083 Test: blockdev reset ...passed 00:11:52.083 Test: blockdev write read 8 blocks ...passed 00:11:52.083 Test: blockdev write read size > 128k ...passed 00:11:52.083 Test: blockdev write read invalid size ...passed 00:11:52.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.083 Test: blockdev write read max offset ...passed 00:11:52.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.083 Test: blockdev writev readv 8 blocks ...passed 00:11:52.083 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.083 Test: blockdev writev readv block ...passed 00:11:52.083 Test: blockdev writev readv size > 128k ...passed 00:11:52.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.083 Test: blockdev comparev and writev ...passed 00:11:52.083 Test: blockdev nvme passthru rw ...passed 00:11:52.083 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.083 Test: blockdev nvme admin passthru ...passed 00:11:52.083 Test: blockdev copy ...passed 00:11:52.083 Suite: bdevio tests on: Malloc2p1 00:11:52.083 Test: blockdev write read block ...passed 00:11:52.083 Test: blockdev write zeroes read block ...passed 00:11:52.083 Test: blockdev write zeroes read no split ...passed 00:11:52.083 Test: blockdev write zeroes read split ...passed 00:11:52.083 Test: blockdev write zeroes read split partial ...passed 00:11:52.083 Test: blockdev reset ...passed 00:11:52.083 Test: blockdev write read 8 blocks ...passed 00:11:52.083 Test: blockdev write read size > 128k ...passed 00:11:52.083 Test: blockdev write read invalid size ...passed 00:11:52.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.083 Test: blockdev write read max offset ...passed 00:11:52.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.083 Test: blockdev writev readv 8 blocks ...passed 00:11:52.083 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.083 Test: blockdev writev readv block ...passed 
00:11:52.083 Test: blockdev writev readv size > 128k ...passed 00:11:52.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.083 Test: blockdev comparev and writev ...passed 00:11:52.083 Test: blockdev nvme passthru rw ...passed 00:11:52.083 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.083 Test: blockdev nvme admin passthru ...passed 00:11:52.083 Test: blockdev copy ...passed 00:11:52.083 Suite: bdevio tests on: Malloc2p0 00:11:52.083 Test: blockdev write read block ...passed 00:11:52.083 Test: blockdev write zeroes read block ...passed 00:11:52.083 Test: blockdev write zeroes read no split ...passed 00:11:52.083 Test: blockdev write zeroes read split ...passed 00:11:52.083 Test: blockdev write zeroes read split partial ...passed 00:11:52.083 Test: blockdev reset ...passed 00:11:52.083 Test: blockdev write read 8 blocks ...passed 00:11:52.083 Test: blockdev write read size > 128k ...passed 00:11:52.083 Test: blockdev write read invalid size ...passed 00:11:52.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.083 Test: blockdev write read max offset ...passed 00:11:52.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.083 Test: blockdev writev readv 8 blocks ...passed 00:11:52.083 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.083 Test: blockdev writev readv block ...passed 00:11:52.083 Test: blockdev writev readv size > 128k ...passed 00:11:52.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.083 Test: blockdev comparev and writev ...passed 00:11:52.083 Test: blockdev nvme passthru rw ...passed 00:11:52.083 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.083 Test: blockdev nvme admin passthru ...passed 00:11:52.083 Test: blockdev copy ...passed 00:11:52.083 Suite: bdevio tests on: Malloc1p1 00:11:52.083 Test: blockdev write read block ...passed 00:11:52.083 Test: blockdev write zeroes read block ...passed 00:11:52.083 Test: blockdev write zeroes read no split ...passed 00:11:52.083 Test: blockdev write zeroes read split ...passed 00:11:52.083 Test: blockdev write zeroes read split partial ...passed 00:11:52.083 Test: blockdev reset ...passed 00:11:52.083 Test: blockdev write read 8 blocks ...passed 00:11:52.083 Test: blockdev write read size > 128k ...passed 00:11:52.083 Test: blockdev write read invalid size ...passed 00:11:52.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.083 Test: blockdev write read max offset ...passed 00:11:52.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.083 Test: blockdev writev readv 8 blocks ...passed 00:11:52.083 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.083 Test: blockdev writev readv block ...passed 00:11:52.083 Test: blockdev writev readv size > 128k ...passed 00:11:52.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.083 Test: blockdev comparev and writev ...passed 00:11:52.083 Test: blockdev nvme passthru rw ...passed 00:11:52.083 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.083 Test: blockdev nvme admin passthru ...passed 00:11:52.083 Test: blockdev copy ...passed 00:11:52.083 Suite: bdevio tests on: Malloc1p0 00:11:52.083 Test: blockdev write read block ...passed 00:11:52.083 Test: blockdev 
write zeroes read block ...passed 00:11:52.083 Test: blockdev write zeroes read no split ...passed 00:11:52.083 Test: blockdev write zeroes read split ...passed 00:11:52.083 Test: blockdev write zeroes read split partial ...passed 00:11:52.083 Test: blockdev reset ...passed 00:11:52.083 Test: blockdev write read 8 blocks ...passed 00:11:52.083 Test: blockdev write read size > 128k ...passed 00:11:52.083 Test: blockdev write read invalid size ...passed 00:11:52.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.083 Test: blockdev write read max offset ...passed 00:11:52.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.083 Test: blockdev writev readv 8 blocks ...passed 00:11:52.083 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.083 Test: blockdev writev readv block ...passed 00:11:52.083 Test: blockdev writev readv size > 128k ...passed 00:11:52.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.083 Test: blockdev comparev and writev ...passed 00:11:52.083 Test: blockdev nvme passthru rw ...passed 00:11:52.083 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.083 Test: blockdev nvme admin passthru ...passed 00:11:52.083 Test: blockdev copy ...passed 00:11:52.083 Suite: bdevio tests on: Malloc0 00:11:52.083 Test: blockdev write read block ...passed 00:11:52.083 Test: blockdev write zeroes read block ...passed 00:11:52.083 Test: blockdev write zeroes read no split ...passed 00:11:52.083 Test: blockdev write zeroes read split ...passed 00:11:52.083 Test: blockdev write zeroes read split partial ...passed 00:11:52.083 Test: blockdev reset ...passed 00:11:52.083 Test: blockdev write read 8 blocks ...passed 00:11:52.083 Test: blockdev write read size > 128k ...passed 00:11:52.083 Test: blockdev write read invalid size ...passed 00:11:52.083 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:52.083 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:52.083 Test: blockdev write read max offset ...passed 00:11:52.083 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:52.083 Test: blockdev writev readv 8 blocks ...passed 00:11:52.083 Test: blockdev writev readv 30 x 1block ...passed 00:11:52.083 Test: blockdev writev readv block ...passed 00:11:52.083 Test: blockdev writev readv size > 128k ...passed 00:11:52.083 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:52.083 Test: blockdev comparev and writev ...passed 00:11:52.083 Test: blockdev nvme passthru rw ...passed 00:11:52.083 Test: blockdev nvme passthru vendor specific ...passed 00:11:52.083 Test: blockdev nvme admin passthru ...passed 00:11:52.083 Test: blockdev copy ...passed 00:11:52.083 00:11:52.083 Run Summary: Type Total Ran Passed Failed Inactive 00:11:52.083 suites 16 16 n/a 0 0 00:11:52.083 tests 368 368 368 0 0 00:11:52.083 asserts 2224 2224 2224 0 n/a 00:11:52.083 00:11:52.083 Elapsed time = 0.715 seconds 00:11:52.083 0 00:11:52.083 10:37:18 -- bdev/blockdev.sh@293 -- # killprocess 119789 00:11:52.083 10:37:18 -- common/autotest_common.sh@926 -- # '[' -z 119789 ']' 00:11:52.083 10:37:18 -- common/autotest_common.sh@930 -- # kill -0 119789 00:11:52.083 10:37:18 -- common/autotest_common.sh@931 -- # uname 00:11:52.083 10:37:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:52.083 10:37:18 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119789 00:11:52.083 10:37:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:52.083 10:37:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:52.083 10:37:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119789' 00:11:52.083 killing process with pid 119789 00:11:52.083 10:37:18 -- common/autotest_common.sh@945 -- # kill 119789 00:11:52.083 10:37:18 -- common/autotest_common.sh@950 -- # wait 119789 00:11:52.649 10:37:19 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:52.649 00:11:52.649 real 0m1.858s 00:11:52.649 user 0m4.452s 00:11:52.649 sys 0m0.455s 00:11:52.649 10:37:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.649 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:11:52.649 ************************************ 00:11:52.649 END TEST bdev_bounds 00:11:52.649 ************************************ 00:11:52.649 10:37:19 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:52.649 10:37:19 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:52.649 10:37:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:52.649 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:11:52.649 ************************************ 00:11:52.649 START TEST bdev_nbd 00:11:52.649 ************************************ 00:11:52.649 10:37:19 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:52.649 10:37:19 -- bdev/blockdev.sh@298 -- # uname -s 00:11:52.649 10:37:19 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:52.649 10:37:19 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:52.649 10:37:19 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:52.649 10:37:19 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:52.649 10:37:19 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:52.649 10:37:19 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:52.649 10:37:19 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:52.649 10:37:19 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:52.649 10:37:19 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:52.649 10:37:19 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:52.649 10:37:19 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:52.649 10:37:19 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:52.649 10:37:19 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:52.649 10:37:19 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:52.649 10:37:19 -- bdev/blockdev.sh@316 -- # nbd_pid=119849 00:11:52.649 10:37:19 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:52.649 10:37:19 -- bdev/blockdev.sh@318 -- # waitforlisten 119849 /var/tmp/spdk-nbd.sock 00:11:52.649 10:37:19 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:52.649 10:37:19 -- common/autotest_common.sh@819 -- # '[' -z 119849 ']' 00:11:52.649 10:37:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:52.649 10:37:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:52.649 10:37:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:52.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:52.650 10:37:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:52.650 10:37:19 -- common/autotest_common.sh@10 -- # set +x 00:11:52.650 [2024-07-24 10:37:19.192883] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:11:52.650 [2024-07-24 10:37:19.193129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.907 [2024-07-24 10:37:19.340602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.907 [2024-07-24 10:37:19.427370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.907 [2024-07-24 10:37:19.574880] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.907 [2024-07-24 10:37:19.575028] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.907 [2024-07-24 10:37:19.582837] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:52.907 [2024-07-24 10:37:19.582936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:53.165 [2024-07-24 10:37:19.590851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:53.165 [2024-07-24 10:37:19.590936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:53.165 [2024-07-24 10:37:19.590974] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:53.165 [2024-07-24 10:37:19.697204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:53.165 [2024-07-24 10:37:19.697377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.165 [2024-07-24 10:37:19.697450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:53.165 [2024-07-24 10:37:19.697490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.165 [2024-07-24 10:37:19.700375] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.165 [2024-07-24 10:37:19.700461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:53.733 10:37:20 -- common/autotest_common.sh@848 -- # (( i == 0 
)) 00:11:53.733 10:37:20 -- common/autotest_common.sh@852 -- # return 0 00:11:53.733 10:37:20 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@24 -- # local i 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:53.733 10:37:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:53.994 10:37:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:53.994 10:37:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:53.994 10:37:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:53.994 10:37:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:11:53.994 10:37:20 -- common/autotest_common.sh@857 -- # local i 00:11:53.994 10:37:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:53.994 10:37:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:53.994 10:37:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:11:53.994 10:37:20 -- common/autotest_common.sh@861 -- # break 00:11:53.994 10:37:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:53.994 10:37:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:53.994 10:37:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.994 1+0 records in 00:11:53.994 1+0 records out 00:11:53.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224118 s, 18.3 MB/s 00:11:53.994 10:37:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.994 10:37:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:53.994 10:37:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.994 10:37:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:53.994 10:37:20 -- common/autotest_common.sh@877 -- # return 0 00:11:53.994 10:37:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:53.994 10:37:20 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:53.994 10:37:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:54.253 10:37:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:54.253 10:37:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:54.253 10:37:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:54.253 10:37:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:11:54.253 10:37:20 -- common/autotest_common.sh@857 -- # local i 00:11:54.253 10:37:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.253 10:37:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.253 10:37:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:11:54.253 10:37:20 -- common/autotest_common.sh@861 -- # break 00:11:54.253 10:37:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.253 10:37:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.254 10:37:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.254 1+0 records in 00:11:54.254 1+0 records out 00:11:54.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351553 s, 11.7 MB/s 00:11:54.254 10:37:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.254 10:37:20 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.254 10:37:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.254 10:37:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.254 10:37:20 -- common/autotest_common.sh@877 -- # return 0 00:11:54.254 10:37:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:54.254 10:37:20 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:54.254 10:37:20 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:54.511 10:37:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:54.511 10:37:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:54.511 10:37:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:54.511 10:37:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:11:54.511 10:37:21 -- common/autotest_common.sh@857 -- # local i 00:11:54.511 10:37:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.511 10:37:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.511 10:37:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:11:54.511 10:37:21 -- common/autotest_common.sh@861 -- # break 00:11:54.511 10:37:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.511 10:37:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.511 10:37:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.511 1+0 records in 00:11:54.511 1+0 records out 00:11:54.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326426 s, 12.5 MB/s 00:11:54.511 10:37:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.511 10:37:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.511 10:37:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.511 10:37:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.511 10:37:21 -- common/autotest_common.sh@877 -- # return 0 00:11:54.511 10:37:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:54.511 10:37:21 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 
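(Each iteration of the loop above follows the same export-and-verify pattern. A minimal standalone sketch of that pattern, with Malloc0 as an example bdev and the same RPC socket path this test already uses:)
  # export a bdev over NBD; the RPC prints the /dev/nbdX node it attached to
  nbd=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0)
  # check that the kernel lists the device, then do a direct-I/O sanity read
  grep -q -w "$(basename "$nbd")" /proc/partitions
  dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct
  # detach the export when finished
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"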
00:11:54.511 10:37:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:54.769 10:37:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:54.769 10:37:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:54.769 10:37:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:54.769 10:37:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:11:54.769 10:37:21 -- common/autotest_common.sh@857 -- # local i 00:11:54.769 10:37:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:54.769 10:37:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:54.769 10:37:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:11:54.769 10:37:21 -- common/autotest_common.sh@861 -- # break 00:11:54.769 10:37:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:54.769 10:37:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:54.769 10:37:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.769 1+0 records in 00:11:54.769 1+0 records out 00:11:54.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369368 s, 11.1 MB/s 00:11:54.769 10:37:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.769 10:37:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:54.769 10:37:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.769 10:37:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:54.769 10:37:21 -- common/autotest_common.sh@877 -- # return 0 00:11:54.769 10:37:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:54.769 10:37:21 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:54.769 10:37:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:55.028 10:37:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:55.028 10:37:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:55.028 10:37:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:55.028 10:37:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:11:55.028 10:37:21 -- common/autotest_common.sh@857 -- # local i 00:11:55.028 10:37:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.028 10:37:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.028 10:37:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:11:55.028 10:37:21 -- common/autotest_common.sh@861 -- # break 00:11:55.028 10:37:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.028 10:37:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.028 10:37:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.028 1+0 records in 00:11:55.028 1+0 records out 00:11:55.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044337 s, 9.2 MB/s 00:11:55.028 10:37:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.028 10:37:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.028 10:37:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.028 10:37:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.028 10:37:21 -- common/autotest_common.sh@877 -- # return 0 00:11:55.028 10:37:21 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.028 10:37:21 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.028 10:37:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:55.287 10:37:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:55.287 10:37:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:55.287 10:37:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:55.287 10:37:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:11:55.287 10:37:21 -- common/autotest_common.sh@857 -- # local i 00:11:55.287 10:37:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.287 10:37:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.287 10:37:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:11:55.287 10:37:21 -- common/autotest_common.sh@861 -- # break 00:11:55.287 10:37:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.287 10:37:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.287 10:37:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.287 1+0 records in 00:11:55.287 1+0 records out 00:11:55.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418621 s, 9.8 MB/s 00:11:55.287 10:37:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.287 10:37:21 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.287 10:37:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.287 10:37:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.287 10:37:21 -- common/autotest_common.sh@877 -- # return 0 00:11:55.287 10:37:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.287 10:37:21 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.287 10:37:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:55.545 10:37:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:55.545 10:37:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:55.545 10:37:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:55.545 10:37:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:11:55.545 10:37:22 -- common/autotest_common.sh@857 -- # local i 00:11:55.545 10:37:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.545 10:37:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.545 10:37:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:11:55.545 10:37:22 -- common/autotest_common.sh@861 -- # break 00:11:55.545 10:37:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.545 10:37:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.545 10:37:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.545 1+0 records in 00:11:55.545 1+0 records out 00:11:55.545 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439137 s, 9.3 MB/s 00:11:55.545 10:37:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.545 10:37:22 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.545 10:37:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.545 10:37:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:11:55.545 10:37:22 -- common/autotest_common.sh@877 -- # return 0 00:11:55.545 10:37:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.545 10:37:22 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.545 10:37:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:55.803 10:37:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:55.803 10:37:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:55.803 10:37:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:55.803 10:37:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:11:55.803 10:37:22 -- common/autotest_common.sh@857 -- # local i 00:11:55.803 10:37:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:55.803 10:37:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:55.803 10:37:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:11:55.803 10:37:22 -- common/autotest_common.sh@861 -- # break 00:11:55.803 10:37:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:55.803 10:37:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:55.803 10:37:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.803 1+0 records in 00:11:55.803 1+0 records out 00:11:55.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567362 s, 7.2 MB/s 00:11:55.803 10:37:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.803 10:37:22 -- common/autotest_common.sh@874 -- # size=4096 00:11:55.803 10:37:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.803 10:37:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:55.803 10:37:22 -- common/autotest_common.sh@877 -- # return 0 00:11:55.803 10:37:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.803 10:37:22 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.803 10:37:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:56.061 10:37:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:56.061 10:37:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:56.061 10:37:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:56.061 10:37:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:11:56.061 10:37:22 -- common/autotest_common.sh@857 -- # local i 00:11:56.061 10:37:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.061 10:37:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.061 10:37:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:11:56.061 10:37:22 -- common/autotest_common.sh@861 -- # break 00:11:56.061 10:37:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.061 10:37:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.061 10:37:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.061 1+0 records in 00:11:56.061 1+0 records out 00:11:56.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448746 s, 9.1 MB/s 00:11:56.061 10:37:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.061 10:37:22 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.061 10:37:22 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.061 10:37:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.061 10:37:22 -- common/autotest_common.sh@877 -- # return 0 00:11:56.061 10:37:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.061 10:37:22 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.061 10:37:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:56.318 10:37:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:56.318 10:37:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:56.318 10:37:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:56.318 10:37:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:11:56.318 10:37:22 -- common/autotest_common.sh@857 -- # local i 00:11:56.318 10:37:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.318 10:37:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.318 10:37:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:11:56.318 10:37:22 -- common/autotest_common.sh@861 -- # break 00:11:56.318 10:37:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.318 10:37:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.318 10:37:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.318 1+0 records in 00:11:56.318 1+0 records out 00:11:56.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759356 s, 5.4 MB/s 00:11:56.318 10:37:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.576 10:37:22 -- common/autotest_common.sh@874 -- # size=4096 00:11:56.576 10:37:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.576 10:37:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.576 10:37:23 -- common/autotest_common.sh@877 -- # return 0 00:11:56.576 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.576 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.576 10:37:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:56.576 10:37:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:56.576 10:37:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:56.576 10:37:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:56.576 10:37:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:11:56.576 10:37:23 -- common/autotest_common.sh@857 -- # local i 00:11:56.576 10:37:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:56.576 10:37:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:56.576 10:37:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:11:56.576 10:37:23 -- common/autotest_common.sh@861 -- # break 00:11:56.576 10:37:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:56.576 10:37:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:56.576 10:37:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.576 1+0 records in 00:11:56.576 1+0 records out 00:11:56.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781318 s, 5.2 MB/s 00:11:56.576 10:37:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.834 10:37:23 -- 
common/autotest_common.sh@874 -- # size=4096 00:11:56.834 10:37:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.834 10:37:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:56.834 10:37:23 -- common/autotest_common.sh@877 -- # return 0 00:11:56.834 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.834 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.834 10:37:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:56.834 10:37:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:56.834 10:37:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:57.092 10:37:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:57.092 10:37:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:11:57.092 10:37:23 -- common/autotest_common.sh@857 -- # local i 00:11:57.092 10:37:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:57.092 10:37:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:57.092 10:37:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:11:57.092 10:37:23 -- common/autotest_common.sh@861 -- # break 00:11:57.092 10:37:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:57.092 10:37:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:57.092 10:37:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.092 1+0 records in 00:11:57.092 1+0 records out 00:11:57.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000745875 s, 5.5 MB/s 00:11:57.092 10:37:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.092 10:37:23 -- common/autotest_common.sh@874 -- # size=4096 00:11:57.092 10:37:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.092 10:37:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:57.092 10:37:23 -- common/autotest_common.sh@877 -- # return 0 00:11:57.092 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.092 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.092 10:37:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:57.349 10:37:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:57.349 10:37:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:57.349 10:37:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:57.349 10:37:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:11:57.349 10:37:23 -- common/autotest_common.sh@857 -- # local i 00:11:57.349 10:37:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:57.349 10:37:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:57.349 10:37:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:11:57.349 10:37:23 -- common/autotest_common.sh@861 -- # break 00:11:57.349 10:37:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:57.349 10:37:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:57.349 10:37:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.349 1+0 records in 00:11:57.349 1+0 records out 00:11:57.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000939849 s, 4.4 MB/s 00:11:57.349 10:37:23 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.349 10:37:23 -- common/autotest_common.sh@874 -- # size=4096 00:11:57.349 10:37:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.349 10:37:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:57.349 10:37:23 -- common/autotest_common.sh@877 -- # return 0 00:11:57.349 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.349 10:37:23 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.349 10:37:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:57.607 10:37:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:57.607 10:37:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:57.607 10:37:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:57.607 10:37:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:11:57.607 10:37:24 -- common/autotest_common.sh@857 -- # local i 00:11:57.607 10:37:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:57.607 10:37:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:57.607 10:37:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:11:57.607 10:37:24 -- common/autotest_common.sh@861 -- # break 00:11:57.607 10:37:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:57.607 10:37:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:57.607 10:37:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.607 1+0 records in 00:11:57.607 1+0 records out 00:11:57.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102137 s, 4.0 MB/s 00:11:57.607 10:37:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.607 10:37:24 -- common/autotest_common.sh@874 -- # size=4096 00:11:57.607 10:37:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.607 10:37:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:57.607 10:37:24 -- common/autotest_common.sh@877 -- # return 0 00:11:57.607 10:37:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.607 10:37:24 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.607 10:37:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:57.865 10:37:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:57.865 10:37:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:57.865 10:37:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:57.865 10:37:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:11:57.865 10:37:24 -- common/autotest_common.sh@857 -- # local i 00:11:57.865 10:37:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:57.865 10:37:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:57.865 10:37:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:11:57.865 10:37:24 -- common/autotest_common.sh@861 -- # break 00:11:57.865 10:37:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:57.865 10:37:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:57.865 10:37:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.865 1+0 records in 00:11:57.865 1+0 records out 
00:11:57.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000936319 s, 4.4 MB/s 00:11:57.865 10:37:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.865 10:37:24 -- common/autotest_common.sh@874 -- # size=4096 00:11:57.865 10:37:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.865 10:37:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:57.865 10:37:24 -- common/autotest_common.sh@877 -- # return 0 00:11:57.865 10:37:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.865 10:37:24 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.865 10:37:24 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:58.122 10:37:24 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:58.122 10:37:24 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:58.122 10:37:24 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:58.122 10:37:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:11:58.122 10:37:24 -- common/autotest_common.sh@857 -- # local i 00:11:58.122 10:37:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:11:58.122 10:37:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:11:58.122 10:37:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:11:58.122 10:37:24 -- common/autotest_common.sh@861 -- # break 00:11:58.122 10:37:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:11:58.122 10:37:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:11:58.122 10:37:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.122 1+0 records in 00:11:58.122 1+0 records out 00:11:58.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137945 s, 3.0 MB/s 00:11:58.122 10:37:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.122 10:37:24 -- common/autotest_common.sh@874 -- # size=4096 00:11:58.122 10:37:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.122 10:37:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:11:58.122 10:37:24 -- common/autotest_common.sh@877 -- # return 0 00:11:58.122 10:37:24 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.122 10:37:24 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:58.122 10:37:24 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd0", 00:11:58.380 "bdev_name": "Malloc0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd1", 00:11:58.380 "bdev_name": "Malloc1p0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd2", 00:11:58.380 "bdev_name": "Malloc1p1" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd3", 00:11:58.380 "bdev_name": "Malloc2p0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd4", 00:11:58.380 "bdev_name": "Malloc2p1" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd5", 00:11:58.380 "bdev_name": "Malloc2p2" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd6", 00:11:58.380 "bdev_name": "Malloc2p3" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd7", 00:11:58.380 "bdev_name": "Malloc2p4" 00:11:58.380 }, 
00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd8", 00:11:58.380 "bdev_name": "Malloc2p5" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd9", 00:11:58.380 "bdev_name": "Malloc2p6" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd10", 00:11:58.380 "bdev_name": "Malloc2p7" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd11", 00:11:58.380 "bdev_name": "TestPT" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd12", 00:11:58.380 "bdev_name": "raid0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd13", 00:11:58.380 "bdev_name": "concat0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd14", 00:11:58.380 "bdev_name": "raid1" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd15", 00:11:58.380 "bdev_name": "AIO0" 00:11:58.380 } 00:11:58.380 ]' 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd0", 00:11:58.380 "bdev_name": "Malloc0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd1", 00:11:58.380 "bdev_name": "Malloc1p0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd2", 00:11:58.380 "bdev_name": "Malloc1p1" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd3", 00:11:58.380 "bdev_name": "Malloc2p0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd4", 00:11:58.380 "bdev_name": "Malloc2p1" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd5", 00:11:58.380 "bdev_name": "Malloc2p2" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd6", 00:11:58.380 "bdev_name": "Malloc2p3" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd7", 00:11:58.380 "bdev_name": "Malloc2p4" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd8", 00:11:58.380 "bdev_name": "Malloc2p5" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd9", 00:11:58.380 "bdev_name": "Malloc2p6" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd10", 00:11:58.380 "bdev_name": "Malloc2p7" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd11", 00:11:58.380 "bdev_name": "TestPT" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd12", 00:11:58.380 "bdev_name": "raid0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd13", 00:11:58.380 "bdev_name": "concat0" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd14", 00:11:58.380 "bdev_name": "raid1" 00:11:58.380 }, 00:11:58.380 { 00:11:58.380 "nbd_device": "/dev/nbd15", 00:11:58.380 "bdev_name": "AIO0" 00:11:58.380 } 00:11:58.380 ]' 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@51 -- # local i 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.380 10:37:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:58.638 10:37:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:58.638 10:37:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@41 -- # break 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.639 10:37:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@41 -- # break 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@45 -- # return 0 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:58.896 10:37:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@41 -- # break 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.154 10:37:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@41 -- # break 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.412 10:37:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:59.676 
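Each nbd_stop_disk call traced above is followed by a waitfornbd_exit poll on /proc/partitions before the loop moves to the next device. The helper's source is not reproduced in this log, so the following is only a sketch of the teardown pattern the trace suggests: the rpc.py invocation, the grep test and the 20-iteration bound are taken from the traced commands, while the 0.1 s back-off is an assumption (every device here disappeared on the first check).

  # Teardown sketch: stop each NBD export over the SPDK RPC socket, then poll
  # /proc/partitions until the kernel no longer lists the device.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rpc_sock=/var/tmp/spdk-nbd.sock
  for dev in /dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3; do    # trace walks nbd0..nbd15
      "$rpc_py" -s "$rpc_sock" nbd_stop_disk "$dev"
      nbd_name=$(basename "$dev")
      for ((i = 1; i <= 20; i++)); do                       # bound matches "(( i <= 20 ))"
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1                                     # assumed retry interval
          else
              break                                         # device released; move on
          fi
      done
  done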
10:37:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@41 -- # break 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.676 10:37:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@41 -- # break 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.948 10:37:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@41 -- # break 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.206 10:37:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:00.463 10:37:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@41 -- # break 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.721 10:37:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@41 -- # break 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:00.999 10:37:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@41 -- # break 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.257 10:37:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@41 -- # break 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.515 10:37:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:01.515 10:37:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@41 -- # break 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.773 10:37:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@41 -- # break 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.031 10:37:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@41 -- # break 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.289 10:37:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@41 -- # break 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.547 10:37:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@41 -- # break 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:02.805 10:37:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@65 -- # true 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@65 -- # count=0 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@122 -- # count=0 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@127 -- # return 0 00:12:03.063 10:37:29 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 
'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@12 -- # local i 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:03.063 10:37:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:03.321 /dev/nbd0 00:12:03.321 10:37:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.321 10:37:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.321 10:37:29 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:03.321 10:37:29 -- common/autotest_common.sh@857 -- # local i 00:12:03.321 10:37:29 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.321 10:37:29 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.321 10:37:29 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:03.321 10:37:29 -- common/autotest_common.sh@861 -- # break 00:12:03.321 10:37:29 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.321 10:37:29 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.321 10:37:29 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.321 1+0 records in 00:12:03.321 1+0 records out 00:12:03.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438763 s, 9.3 MB/s 00:12:03.321 10:37:29 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.321 10:37:29 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.321 10:37:29 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.321 10:37:29 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.321 10:37:29 -- common/autotest_common.sh@877 -- # return 0 00:12:03.321 10:37:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.321 
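The grep/dd/stat sequence that follows each nbd_start_disk above is the waitfornbd readiness probe: once the device shows up in /proc/partitions, a single 4096-byte O_DIRECT read is copied into test/bdev/nbdtest, the file size is checked, and the scratch file is removed. The sketch below is reconstructed from the traced commands only; the function name and the sleep-and-retry branches are illustrative, since every device in this run passed on the first attempt.

  # Readiness probe sketch: wait for an NBD device to register, then verify it
  # can serve a 4096-byte direct read before the test proceeds.
  waitfornbd_sketch() {
      local nbd_name=$1
      local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
      local i size
      for ((i = 1; i <= 20; i++)); do                   # first counter, seen at @859
          if grep -q -w "$nbd_name" /proc/partitions; then
              break                                     # device is visible to the kernel
          fi
          sleep 0.1                                     # assumed back-off
      done
      for ((i = 1; i <= 20; i++)); do                   # second counter, seen at @872
          if dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct; then
              break                                     # one 4 KiB direct read succeeded
          fi
          sleep 0.1                                     # assumed back-off
      done
      size=$(stat -c %s "$scratch")
      rm -f "$scratch"
      [ "$size" != 0 ]                                  # trace checks '[' 4096 '!=' 0 ']'
  }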
10:37:29 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:03.321 10:37:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:03.887 /dev/nbd1 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.887 10:37:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:03.887 10:37:30 -- common/autotest_common.sh@857 -- # local i 00:12:03.887 10:37:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:03.887 10:37:30 -- common/autotest_common.sh@861 -- # break 00:12:03.887 10:37:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.887 1+0 records in 00:12:03.887 1+0 records out 00:12:03.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594961 s, 6.9 MB/s 00:12:03.887 10:37:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.887 10:37:30 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.887 10:37:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.887 10:37:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.887 10:37:30 -- common/autotest_common.sh@877 -- # return 0 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:03.887 /dev/nbd10 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:03.887 10:37:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:03.887 10:37:30 -- common/autotest_common.sh@857 -- # local i 00:12:03.887 10:37:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:03.887 10:37:30 -- common/autotest_common.sh@861 -- # break 00:12:03.887 10:37:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:03.887 10:37:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.887 1+0 records in 00:12:03.887 1+0 records out 00:12:03.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589194 s, 7.0 MB/s 00:12:03.887 10:37:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.887 10:37:30 -- common/autotest_common.sh@874 -- # size=4096 00:12:03.887 10:37:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.887 10:37:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:03.887 10:37:30 -- common/autotest_common.sh@877 -- # return 0 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:03.887 10:37:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:04.145 /dev/nbd11 00:12:04.145 10:37:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:04.145 10:37:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:04.145 10:37:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:04.145 10:37:30 -- common/autotest_common.sh@857 -- # local i 00:12:04.145 10:37:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.145 10:37:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.145 10:37:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:04.145 10:37:30 -- common/autotest_common.sh@861 -- # break 00:12:04.145 10:37:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.145 10:37:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.145 10:37:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.145 1+0 records in 00:12:04.145 1+0 records out 00:12:04.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862919 s, 4.7 MB/s 00:12:04.145 10:37:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.145 10:37:30 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.145 10:37:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.145 10:37:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.145 10:37:30 -- common/autotest_common.sh@877 -- # return 0 00:12:04.145 10:37:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.145 10:37:30 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.145 10:37:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:04.736 /dev/nbd12 00:12:04.736 10:37:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:04.736 10:37:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:04.736 10:37:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:04.736 10:37:31 -- common/autotest_common.sh@857 -- # local i 00:12:04.736 10:37:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.736 10:37:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.736 10:37:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:04.736 10:37:31 -- common/autotest_common.sh@861 -- # break 00:12:04.736 10:37:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.736 10:37:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.736 10:37:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.736 1+0 records in 00:12:04.736 1+0 records out 00:12:04.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464842 s, 8.8 MB/s 00:12:04.736 10:37:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.736 10:37:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.736 10:37:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.736 10:37:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.736 10:37:31 -- common/autotest_common.sh@877 -- # return 0 00:12:04.736 10:37:31 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.736 10:37:31 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.737 10:37:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:04.993 /dev/nbd13 00:12:04.993 10:37:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:04.993 10:37:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:04.993 10:37:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:04.993 10:37:31 -- common/autotest_common.sh@857 -- # local i 00:12:04.993 10:37:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:04.993 10:37:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:04.993 10:37:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:04.993 10:37:31 -- common/autotest_common.sh@861 -- # break 00:12:04.993 10:37:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:04.993 10:37:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:04.993 10:37:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:04.993 1+0 records in 00:12:04.993 1+0 records out 00:12:04.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467387 s, 8.8 MB/s 00:12:04.993 10:37:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.993 10:37:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:04.993 10:37:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:04.993 10:37:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:04.993 10:37:31 -- common/autotest_common.sh@877 -- # return 0 00:12:04.994 10:37:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:04.994 10:37:31 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:04.994 10:37:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:05.251 /dev/nbd14 00:12:05.251 10:37:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:05.251 10:37:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:05.251 10:37:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:05.251 10:37:31 -- common/autotest_common.sh@857 -- # local i 00:12:05.251 10:37:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.251 10:37:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.251 10:37:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:05.251 10:37:31 -- common/autotest_common.sh@861 -- # break 00:12:05.251 10:37:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.251 10:37:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.251 10:37:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.251 1+0 records in 00:12:05.251 1+0 records out 00:12:05.251 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471016 s, 8.7 MB/s 00:12:05.251 10:37:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.251 10:37:31 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.251 10:37:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.251 10:37:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.251 10:37:31 -- common/autotest_common.sh@877 -- # return 0 
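At this point the trace has exported Malloc0 through Malloc2p2 on their matching /dev/nbdN nodes; nbd_rpc_data_verify pairs the i-th name in bdev_list with the i-th entry in nbd_list and runs the readiness probe after each export. A sketch of that pairing loop follows, with the rpc.py call taken from the trace, the device count of 16 reflected in the "(( i < 16 ))" guard above, shortened lists for illustration, and waitfornbd_sketch referring to the probe sketched earlier.

  # Start-up pairing sketch: export each bdev on its matching NBD node, then
  # probe the new device before moving to the next pair.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rpc_sock=/var/tmp/spdk-nbd.sock
  bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0)         # trace uses 16 bdevs
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11)      # matching device nodes
  for ((i = 0; i < ${#bdev_list[@]}; i++)); do
      "$rpc_py" -s "$rpc_sock" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
      waitfornbd_sketch "$(basename "${nbd_list[i]}")"      # probe from the earlier sketch
  done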
00:12:05.251 10:37:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.251 10:37:31 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.251 10:37:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:05.509 /dev/nbd15 00:12:05.509 10:37:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:05.509 10:37:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:05.509 10:37:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:05.509 10:37:32 -- common/autotest_common.sh@857 -- # local i 00:12:05.509 10:37:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.509 10:37:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.509 10:37:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:05.509 10:37:32 -- common/autotest_common.sh@861 -- # break 00:12:05.509 10:37:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.509 10:37:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.509 10:37:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.509 1+0 records in 00:12:05.509 1+0 records out 00:12:05.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522123 s, 7.8 MB/s 00:12:05.509 10:37:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.509 10:37:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.509 10:37:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.509 10:37:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.509 10:37:32 -- common/autotest_common.sh@877 -- # return 0 00:12:05.509 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.509 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.509 10:37:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:05.767 /dev/nbd2 00:12:05.767 10:37:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:05.767 10:37:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:05.767 10:37:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:05.767 10:37:32 -- common/autotest_common.sh@857 -- # local i 00:12:05.767 10:37:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:05.767 10:37:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:05.767 10:37:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:05.767 10:37:32 -- common/autotest_common.sh@861 -- # break 00:12:05.767 10:37:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:05.767 10:37:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:05.767 10:37:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.767 1+0 records in 00:12:05.767 1+0 records out 00:12:05.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560869 s, 7.3 MB/s 00:12:05.767 10:37:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.767 10:37:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:05.767 10:37:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.767 10:37:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:05.767 10:37:32 -- common/autotest_common.sh@877 
-- # return 0 00:12:05.767 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.767 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.767 10:37:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:06.025 /dev/nbd3 00:12:06.025 10:37:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:06.025 10:37:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:06.025 10:37:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:06.025 10:37:32 -- common/autotest_common.sh@857 -- # local i 00:12:06.025 10:37:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:06.025 10:37:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:06.025 10:37:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:06.026 10:37:32 -- common/autotest_common.sh@861 -- # break 00:12:06.026 10:37:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:06.026 10:37:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:06.026 10:37:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.026 1+0 records in 00:12:06.026 1+0 records out 00:12:06.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000889481 s, 4.6 MB/s 00:12:06.026 10:37:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.026 10:37:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:06.026 10:37:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.026 10:37:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:06.026 10:37:32 -- common/autotest_common.sh@877 -- # return 0 00:12:06.026 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.026 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.026 10:37:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:06.284 /dev/nbd4 00:12:06.284 10:37:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:06.284 10:37:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:06.284 10:37:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:06.284 10:37:32 -- common/autotest_common.sh@857 -- # local i 00:12:06.284 10:37:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:06.284 10:37:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:06.284 10:37:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:06.284 10:37:32 -- common/autotest_common.sh@861 -- # break 00:12:06.284 10:37:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:06.284 10:37:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:06.284 10:37:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.284 1+0 records in 00:12:06.284 1+0 records out 00:12:06.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831796 s, 4.9 MB/s 00:12:06.284 10:37:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.284 10:37:32 -- common/autotest_common.sh@874 -- # size=4096 00:12:06.284 10:37:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.284 10:37:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:06.284 10:37:32 -- 
common/autotest_common.sh@877 -- # return 0 00:12:06.284 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.284 10:37:32 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.284 10:37:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:06.542 /dev/nbd5 00:12:06.799 10:37:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:06.799 10:37:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:06.799 10:37:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:06.799 10:37:33 -- common/autotest_common.sh@857 -- # local i 00:12:06.799 10:37:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:06.799 10:37:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:06.799 10:37:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:06.799 10:37:33 -- common/autotest_common.sh@861 -- # break 00:12:06.799 10:37:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:06.799 10:37:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:06.800 10:37:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.800 1+0 records in 00:12:06.800 1+0 records out 00:12:06.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636085 s, 6.4 MB/s 00:12:06.800 10:37:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.800 10:37:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:06.800 10:37:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.800 10:37:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:06.800 10:37:33 -- common/autotest_common.sh@877 -- # return 0 00:12:06.800 10:37:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.800 10:37:33 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.800 10:37:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:07.057 /dev/nbd6 00:12:07.057 10:37:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:07.057 10:37:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:07.057 10:37:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:07.057 10:37:33 -- common/autotest_common.sh@857 -- # local i 00:12:07.057 10:37:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:07.057 10:37:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:07.057 10:37:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:07.057 10:37:33 -- common/autotest_common.sh@861 -- # break 00:12:07.057 10:37:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:07.057 10:37:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:07.057 10:37:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.057 1+0 records in 00:12:07.057 1+0 records out 00:12:07.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647011 s, 6.3 MB/s 00:12:07.057 10:37:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.057 10:37:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:07.057 10:37:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.057 10:37:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:07.057 10:37:33 -- 
common/autotest_common.sh@877 -- # return 0 00:12:07.057 10:37:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.057 10:37:33 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.057 10:37:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:07.316 /dev/nbd7 00:12:07.316 10:37:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:07.316 10:37:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:07.316 10:37:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:07.316 10:37:33 -- common/autotest_common.sh@857 -- # local i 00:12:07.316 10:37:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:07.316 10:37:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:07.316 10:37:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:07.316 10:37:33 -- common/autotest_common.sh@861 -- # break 00:12:07.316 10:37:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:07.316 10:37:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:07.316 10:37:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.316 1+0 records in 00:12:07.316 1+0 records out 00:12:07.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000826308 s, 5.0 MB/s 00:12:07.316 10:37:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.316 10:37:33 -- common/autotest_common.sh@874 -- # size=4096 00:12:07.316 10:37:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.316 10:37:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:07.316 10:37:33 -- common/autotest_common.sh@877 -- # return 0 00:12:07.316 10:37:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.316 10:37:33 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.316 10:37:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:07.574 /dev/nbd8 00:12:07.574 10:37:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:07.574 10:37:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:07.574 10:37:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:07.574 10:37:34 -- common/autotest_common.sh@857 -- # local i 00:12:07.574 10:37:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:07.574 10:37:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:07.574 10:37:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:07.574 10:37:34 -- common/autotest_common.sh@861 -- # break 00:12:07.574 10:37:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:07.574 10:37:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:07.574 10:37:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.574 1+0 records in 00:12:07.574 1+0 records out 00:12:07.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591751 s, 6.9 MB/s 00:12:07.574 10:37:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.574 10:37:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:07.574 10:37:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.574 10:37:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:07.574 10:37:34 
-- common/autotest_common.sh@877 -- # return 0 00:12:07.574 10:37:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.574 10:37:34 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.574 10:37:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:07.832 /dev/nbd9 00:12:08.090 10:37:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:08.090 10:37:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:08.090 10:37:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:08.090 10:37:34 -- common/autotest_common.sh@857 -- # local i 00:12:08.090 10:37:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:08.090 10:37:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:08.090 10:37:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:08.090 10:37:34 -- common/autotest_common.sh@861 -- # break 00:12:08.090 10:37:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:08.090 10:37:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:08.090 10:37:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.090 1+0 records in 00:12:08.090 1+0 records out 00:12:08.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123269 s, 3.3 MB/s 00:12:08.090 10:37:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.090 10:37:34 -- common/autotest_common.sh@874 -- # size=4096 00:12:08.090 10:37:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.090 10:37:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:08.090 10:37:34 -- common/autotest_common.sh@877 -- # return 0 00:12:08.090 10:37:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.090 10:37:34 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.090 10:37:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:08.090 10:37:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:08.090 10:37:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:08.348 10:37:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd0", 00:12:08.348 "bdev_name": "Malloc0" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd1", 00:12:08.348 "bdev_name": "Malloc1p0" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd10", 00:12:08.348 "bdev_name": "Malloc1p1" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd11", 00:12:08.348 "bdev_name": "Malloc2p0" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd12", 00:12:08.348 "bdev_name": "Malloc2p1" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd13", 00:12:08.348 "bdev_name": "Malloc2p2" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd14", 00:12:08.348 "bdev_name": "Malloc2p3" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd15", 00:12:08.348 "bdev_name": "Malloc2p4" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd2", 00:12:08.348 "bdev_name": "Malloc2p5" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd3", 00:12:08.348 "bdev_name": "Malloc2p6" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd4", 00:12:08.348 "bdev_name": "Malloc2p7" 00:12:08.348 }, 00:12:08.348 { 
00:12:08.348 "nbd_device": "/dev/nbd5", 00:12:08.348 "bdev_name": "TestPT" 00:12:08.348 }, 00:12:08.348 { 00:12:08.348 "nbd_device": "/dev/nbd6", 00:12:08.348 "bdev_name": "raid0" 00:12:08.348 }, 00:12:08.348 { 00:12:08.349 "nbd_device": "/dev/nbd7", 00:12:08.349 "bdev_name": "concat0" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd8", 00:12:08.349 "bdev_name": "raid1" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd9", 00:12:08.349 "bdev_name": "AIO0" 00:12:08.349 } 00:12:08.349 ]' 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd0", 00:12:08.349 "bdev_name": "Malloc0" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd1", 00:12:08.349 "bdev_name": "Malloc1p0" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd10", 00:12:08.349 "bdev_name": "Malloc1p1" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd11", 00:12:08.349 "bdev_name": "Malloc2p0" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd12", 00:12:08.349 "bdev_name": "Malloc2p1" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd13", 00:12:08.349 "bdev_name": "Malloc2p2" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd14", 00:12:08.349 "bdev_name": "Malloc2p3" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd15", 00:12:08.349 "bdev_name": "Malloc2p4" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd2", 00:12:08.349 "bdev_name": "Malloc2p5" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd3", 00:12:08.349 "bdev_name": "Malloc2p6" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd4", 00:12:08.349 "bdev_name": "Malloc2p7" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd5", 00:12:08.349 "bdev_name": "TestPT" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd6", 00:12:08.349 "bdev_name": "raid0" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd7", 00:12:08.349 "bdev_name": "concat0" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd8", 00:12:08.349 "bdev_name": "raid1" 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "nbd_device": "/dev/nbd9", 00:12:08.349 "bdev_name": "AIO0" 00:12:08.349 } 00:12:08.349 ]' 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:08.349 /dev/nbd1 00:12:08.349 /dev/nbd10 00:12:08.349 /dev/nbd11 00:12:08.349 /dev/nbd12 00:12:08.349 /dev/nbd13 00:12:08.349 /dev/nbd14 00:12:08.349 /dev/nbd15 00:12:08.349 /dev/nbd2 00:12:08.349 /dev/nbd3 00:12:08.349 /dev/nbd4 00:12:08.349 /dev/nbd5 00:12:08.349 /dev/nbd6 00:12:08.349 /dev/nbd7 00:12:08.349 /dev/nbd8 00:12:08.349 /dev/nbd9' 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:08.349 /dev/nbd1 00:12:08.349 /dev/nbd10 00:12:08.349 /dev/nbd11 00:12:08.349 /dev/nbd12 00:12:08.349 /dev/nbd13 00:12:08.349 /dev/nbd14 00:12:08.349 /dev/nbd15 00:12:08.349 /dev/nbd2 00:12:08.349 /dev/nbd3 00:12:08.349 /dev/nbd4 00:12:08.349 /dev/nbd5 00:12:08.349 /dev/nbd6 00:12:08.349 /dev/nbd7 00:12:08.349 /dev/nbd8 00:12:08.349 /dev/nbd9' 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@65 -- # count=16 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@95 -- # count=16 00:12:08.349 10:37:34 -- 
bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:08.349 256+0 records in 00:12:08.349 256+0 records out 00:12:08.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00893293 s, 117 MB/s 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.349 10:37:34 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:08.349 256+0 records in 00:12:08.349 256+0 records out 00:12:08.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143987 s, 7.3 MB/s 00:12:08.349 10:37:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.349 10:37:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:08.607 256+0 records in 00:12:08.607 256+0 records out 00:12:08.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146324 s, 7.2 MB/s 00:12:08.607 10:37:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.607 10:37:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:08.865 256+0 records in 00:12:08.865 256+0 records out 00:12:08.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146177 s, 7.2 MB/s 00:12:08.865 10:37:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.865 10:37:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:08.865 256+0 records in 00:12:08.865 256+0 records out 00:12:08.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148198 s, 7.1 MB/s 00:12:08.865 10:37:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:08.865 10:37:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:09.123 256+0 records in 00:12:09.123 256+0 records out 00:12:09.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146043 s, 7.2 MB/s 00:12:09.123 10:37:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.123 10:37:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:09.123 256+0 records in 00:12:09.123 256+0 records out 00:12:09.123 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14504 s, 7.2 MB/s 00:12:09.123 10:37:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.123 10:37:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:09.379 256+0 records in 00:12:09.379 256+0 records out 00:12:09.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146179 s, 7.2 MB/s 00:12:09.379 10:37:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.379 10:37:35 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:09.379 256+0 records in 00:12:09.379 256+0 records out 00:12:09.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146646 s, 7.2 MB/s 00:12:09.637 10:37:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.637 10:37:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:09.637 256+0 records in 00:12:09.637 256+0 records out 00:12:09.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146419 s, 7.2 MB/s 00:12:09.637 10:37:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.637 10:37:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:09.894 256+0 records in 00:12:09.894 256+0 records out 00:12:09.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145655 s, 7.2 MB/s 00:12:09.894 10:37:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.894 10:37:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:09.894 256+0 records in 00:12:09.894 256+0 records out 00:12:09.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145522 s, 7.2 MB/s 00:12:09.894 10:37:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.894 10:37:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:10.152 256+0 records in 00:12:10.152 256+0 records out 00:12:10.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146554 s, 7.2 MB/s 00:12:10.152 10:37:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.152 10:37:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:10.152 256+0 records in 00:12:10.152 256+0 records out 00:12:10.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147594 s, 7.1 MB/s 00:12:10.152 10:37:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.152 10:37:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:10.410 256+0 records in 00:12:10.410 256+0 records out 00:12:10.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148538 s, 7.1 MB/s 00:12:10.410 10:37:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.410 10:37:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:10.668 256+0 records in 00:12:10.668 256+0 records out 00:12:10.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152483 s, 6.9 MB/s 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:10.668 256+0 records in 00:12:10.668 256+0 records out 00:12:10.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197138 s, 5.3 MB/s 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.668 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.926 10:37:37 -- bdev/nbd_common.sh@51 -- # local i 00:12:10.927 10:37:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.927 10:37:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@41 -- # break 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.184 10:37:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@41 -- # break 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.441 10:37:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.699 10:37:38 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@41 -- # break 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.699 10:37:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@41 -- # break 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.957 10:37:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@41 -- # break 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.215 10:37:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@41 -- # break 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.474 10:37:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@41 -- # break 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.732 10:37:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:12.990 10:37:39 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@41 -- # break 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.990 10:37:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@41 -- # break 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.248 10:37:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@41 -- # break 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.530 10:37:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:13.788 10:37:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:13.788 10:37:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:13.788 10:37:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:13.788 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.789 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.789 10:37:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:13.789 10:37:40 -- bdev/nbd_common.sh@41 -- # break 00:12:13.789 10:37:40 -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.789 10:37:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.789 10:37:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@41 
-- # break 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.047 10:37:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@41 -- # break 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.305 10:37:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@41 -- # break 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.563 10:37:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@41 -- # break 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.822 10:37:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@41 -- # break 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.080 10:37:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@65 -- # true 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@65 -- # count=0 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@104 -- # count=0 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@109 -- # return 0 00:12:15.338 10:37:41 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:15.338 10:37:41 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:15.596 malloc_lvol_verify 00:12:15.596 10:37:42 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:15.853 dc98e246-2dca-4cc1-85bc-fe2017893c42 00:12:15.853 10:37:42 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:16.119 4a27fd12-d50e-48dc-a47e-a211fe72a4ff 00:12:16.119 10:37:42 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:16.377 /dev/nbd0 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:16.377 mke2fs 1.46.5 (30-Dec-2021) 00:12:16.377 00:12:16.377 Filesystem too small for a journal 00:12:16.377 Discarding device blocks: 0/1024 done 00:12:16.377 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:16.377 00:12:16.377 Allocating group tables: 0/1 done 00:12:16.377 Writing inode tables: 0/1 done 00:12:16.377 Writing superblocks and filesystem accounting information: 0/1 done 00:12:16.377 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@51 -- # local i 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.377 10:37:42 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:16.636 
10:37:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@41 -- # break 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:16.636 10:37:43 -- bdev/nbd_common.sh@147 -- # return 0 00:12:16.636 10:37:43 -- bdev/blockdev.sh@324 -- # killprocess 119849 00:12:16.636 10:37:43 -- common/autotest_common.sh@926 -- # '[' -z 119849 ']' 00:12:16.636 10:37:43 -- common/autotest_common.sh@930 -- # kill -0 119849 00:12:16.636 10:37:43 -- common/autotest_common.sh@931 -- # uname 00:12:16.636 10:37:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:16.636 10:37:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119849 00:12:16.636 10:37:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:16.636 10:37:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:16.636 10:37:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119849' 00:12:16.636 killing process with pid 119849 00:12:16.636 10:37:43 -- common/autotest_common.sh@945 -- # kill 119849 00:12:16.636 10:37:43 -- common/autotest_common.sh@950 -- # wait 119849 00:12:17.214 10:37:43 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:17.215 00:12:17.215 real 0m24.555s 00:12:17.215 user 0m35.021s 00:12:17.215 sys 0m9.000s 00:12:17.215 10:37:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:17.215 10:37:43 -- common/autotest_common.sh@10 -- # set +x 00:12:17.215 ************************************ 00:12:17.215 END TEST bdev_nbd 00:12:17.215 ************************************ 00:12:17.215 10:37:43 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:17.215 10:37:43 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.215 10:37:43 -- common/autotest_common.sh@10 -- # set +x 00:12:17.215 ************************************ 00:12:17.215 START TEST bdev_fio 00:12:17.215 ************************************ 00:12:17.215 10:37:43 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@329 -- # local env_context 00:12:17.215 10:37:43 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:17.215 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:17.215 10:37:43 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:17.215 10:37:43 -- bdev/blockdev.sh@337 -- # echo '' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:17.215 10:37:43 -- bdev/blockdev.sh@337 -- # env_context= 00:12:17.215 10:37:43 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.215 10:37:43 -- common/autotest_common.sh@1260 -- # 
local workload=verify 00:12:17.215 10:37:43 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:12:17.215 10:37:43 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:17.215 10:37:43 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:17.215 10:37:43 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:17.215 10:37:43 -- common/autotest_common.sh@1280 -- # cat 00:12:17.215 10:37:43 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1293 -- # cat 00:12:17.215 10:37:43 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:12:17.215 10:37:43 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:17.215 10:37:43 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b 
in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:17.215 10:37:43 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:17.215 10:37:43 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:17.215 10:37:43 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:17.215 10:37:43 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.215 10:37:43 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:17.215 10:37:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.215 10:37:43 -- common/autotest_common.sh@10 -- # set +x 00:12:17.215 ************************************ 00:12:17.215 START TEST bdev_fio_rw_verify 00:12:17.215 ************************************ 00:12:17.215 10:37:43 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.215 10:37:43 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.215 10:37:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:17.215 10:37:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:17.215 10:37:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:17.215 10:37:43 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:17.215 10:37:43 -- common/autotest_common.sh@1320 -- # shift 00:12:17.215 10:37:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:17.215 10:37:43 -- common/autotest_common.sh@1323 -- # for sanitizer in 
"${sanitizers[@]}" 00:12:17.215 10:37:43 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:17.215 10:37:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:17.215 10:37:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:17.486 10:37:43 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:17.486 10:37:43 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:17.486 10:37:43 -- common/autotest_common.sh@1326 -- # break 00:12:17.486 10:37:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:17.486 10:37:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:17.486 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:17.486 fio-3.35 00:12:17.486 Starting 16 threads 00:12:29.700 00:12:29.700 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=121003: Wed Jul 24 10:37:54 2024 00:12:29.700 read: IOPS=69.8k, BW=273MiB/s (286MB/s)(2727MiB/10001msec) 00:12:29.700 slat (usec): min=2, max=44063, avg=40.94, stdev=465.23 00:12:29.700 clat (usec): min=8, max=44343, avg=315.63, stdev=1323.43 00:12:29.700 lat (usec): 
min=30, max=44383, avg=356.57, stdev=1402.44 00:12:29.700 clat percentiles (usec): 00:12:29.700 | 50.000th=[ 192], 99.000th=[ 783], 99.900th=[16450], 99.990th=[32113], 00:12:29.700 | 99.999th=[44303] 00:12:29.700 write: IOPS=109k, BW=426MiB/s (447MB/s)(4229MiB/9918msec); 0 zone resets 00:12:29.700 slat (usec): min=6, max=79681, avg=75.31, stdev=727.29 00:12:29.700 clat (usec): min=9, max=80027, avg=424.99, stdev=1670.22 00:12:29.700 lat (usec): min=39, max=80071, avg=500.29, stdev=1821.21 00:12:29.700 clat percentiles (usec): 00:12:29.700 | 50.000th=[ 245], 99.000th=[ 8291], 99.900th=[22152], 99.990th=[37487], 00:12:29.700 | 99.999th=[63701] 00:12:29.700 bw ( KiB/s): min=256632, max=702928, per=98.79%, avg=431390.05, stdev=8218.40, samples=304 00:12:29.700 iops : min=64158, max=175732, avg=107847.37, stdev=2054.62, samples=304 00:12:29.700 lat (usec) : 10=0.01%, 20=0.01%, 50=0.66%, 100=9.11%, 250=49.94% 00:12:29.700 lat (usec) : 500=36.67%, 750=1.99%, 1000=0.34% 00:12:29.700 lat (msec) : 2=0.19%, 4=0.08%, 10=0.20%, 20=0.71%, 50=0.11% 00:12:29.700 lat (msec) : 100=0.01% 00:12:29.700 cpu : usr=55.79%, sys=2.48%, ctx=235621, majf=2, minf=85366 00:12:29.700 IO depths : 1=11.2%, 2=23.7%, 4=52.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:29.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.700 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:29.700 issued rwts: total=698119,1082691,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:29.700 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:29.700 00:12:29.700 Run status group 0 (all jobs): 00:12:29.700 READ: bw=273MiB/s (286MB/s), 273MiB/s-273MiB/s (286MB/s-286MB/s), io=2727MiB (2859MB), run=10001-10001msec 00:12:29.700 WRITE: bw=426MiB/s (447MB/s), 426MiB/s-426MiB/s (447MB/s-447MB/s), io=4229MiB (4435MB), run=9918-9918msec 00:12:29.700 ----------------------------------------------------- 00:12:29.700 Suppressions used: 00:12:29.700 count bytes template 00:12:29.700 16 140 /usr/src/fio/parse.c 00:12:29.700 9650 926400 /usr/src/fio/iolog.c 00:12:29.700 1 904 libcrypto.so 00:12:29.700 ----------------------------------------------------- 00:12:29.700 00:12:29.700 ************************************ 00:12:29.700 END TEST bdev_fio_rw_verify 00:12:29.700 ************************************ 00:12:29.700 00:12:29.700 real 0m11.897s 00:12:29.700 user 1m31.898s 00:12:29.700 sys 0m5.069s 00:12:29.700 10:37:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.700 10:37:55 -- common/autotest_common.sh@10 -- # set +x 00:12:29.700 10:37:55 -- bdev/blockdev.sh@348 -- # rm -f 00:12:29.700 10:37:55 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:29.700 10:37:55 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:29.700 10:37:55 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:29.700 10:37:55 -- common/autotest_common.sh@1260 -- # local workload=trim 00:12:29.700 10:37:55 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:12:29.700 10:37:55 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:29.700 10:37:55 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:29.700 10:37:55 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:29.700 10:37:55 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:12:29.700 10:37:55 -- 
common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:29.700 10:37:55 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:29.700 10:37:55 -- common/autotest_common.sh@1280 -- # cat 00:12:29.700 10:37:55 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:12:29.700 10:37:55 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:12:29.700 10:37:55 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:12:29.700 10:37:55 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:29.701 10:37:55 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d6977d22-668c-41f5-8c35-859816c5244e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6977d22-668c-41f5-8c35-859816c5244e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8e8bfec5-33dd-5d89-b756-a1409cab9593"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8e8bfec5-33dd-5d89-b756-a1409cab9593",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "355a8041-5461-5912-b6df-32d913c52c35"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "355a8041-5461-5912-b6df-32d913c52c35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bcd665cc-6316-524c-b96d-fbc52b54a800"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bcd665cc-6316-524c-b96d-fbc52b54a800",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "9037478a-c430-5e4c-ba63-f05395d232f3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9037478a-c430-5e4c-ba63-f05395d232f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "76a608af-2336-56a7-9837-1406fa691046"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "76a608af-2336-56a7-9837-1406fa691046",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c2e465e8-673d-5387-ac90-ee64c349ead8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c2e465e8-673d-5387-ac90-ee64c349ead8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "ae68e692-c0be-5b7b-98cc-3046bd1a9e34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ae68e692-c0be-5b7b-98cc-3046bd1a9e34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "1ad47b23-c6ab-54dc-b8a7-6c145a812a88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ad47b23-c6ab-54dc-b8a7-6c145a812a88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0ee9ad56-9d5f-5382-b818-35d031e09f2a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0ee9ad56-9d5f-5382-b818-35d031e09f2a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "3f030dc7-69fa-5fcd-aeb0-dfe80b67ab80"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3f030dc7-69fa-5fcd-aeb0-dfe80b67ab80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "303ac22a-fbd5-5cf4-a1ce-79ce49eaa531"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "303ac22a-fbd5-5cf4-a1ce-79ce49eaa531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "ce15e5b2-1808-46a4-8157-8c01c42c9619"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce15e5b2-1808-46a4-8157-8c01c42c9619",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce15e5b2-1808-46a4-8157-8c01c42c9619",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c4f7dd89-5baa-40a2-b68c-b2fb57c26d6d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "21839463-bb07-43b2-ab52-90c8cca2ff8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "88819bed-3f7d-4ab6-9ba7-d06dd67bf543"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "88819bed-3f7d-4ab6-9ba7-d06dd67bf543",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "88819bed-3f7d-4ab6-9ba7-d06dd67bf543",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f6646fac-391a-438a-9a8e-34f4813533d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ef786fc3-b9f3-4d50-8a55-4788aa0d523b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "867d3e01-1a46-45c2-aafc-6b430032e71e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "867d3e01-1a46-45c2-aafc-6b430032e71e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "867d3e01-1a46-45c2-aafc-6b430032e71e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d7657b0a-a85a-4670-acfe-25a5bc79ddb3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 
65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "1599c0ea-fd25-4887-886f-1f0c00194b1c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "f6a85a9b-b08c-4622-ace3-d970dd8aac9d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "f6a85a9b-b08c-4622-ace3-d970dd8aac9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:29.701 10:37:55 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:29.701 Malloc1p0 00:12:29.701 Malloc1p1 00:12:29.701 Malloc2p0 00:12:29.701 Malloc2p1 00:12:29.701 Malloc2p2 00:12:29.701 Malloc2p3 00:12:29.701 Malloc2p4 00:12:29.701 Malloc2p5 00:12:29.701 Malloc2p6 00:12:29.701 Malloc2p7 00:12:29.701 TestPT 00:12:29.701 raid0 00:12:29.701 concat0 ]] 00:12:29.702 10:37:55 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d6977d22-668c-41f5-8c35-859816c5244e"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d6977d22-668c-41f5-8c35-859816c5244e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "8e8bfec5-33dd-5d89-b756-a1409cab9593"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "8e8bfec5-33dd-5d89-b756-a1409cab9593",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "355a8041-5461-5912-b6df-32d913c52c35"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "355a8041-5461-5912-b6df-32d913c52c35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "bcd665cc-6316-524c-b96d-fbc52b54a800"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bcd665cc-6316-524c-b96d-fbc52b54a800",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "9037478a-c430-5e4c-ba63-f05395d232f3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9037478a-c430-5e4c-ba63-f05395d232f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "76a608af-2336-56a7-9837-1406fa691046"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "76a608af-2336-56a7-9837-1406fa691046",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "c2e465e8-673d-5387-ac90-ee64c349ead8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c2e465e8-673d-5387-ac90-ee64c349ead8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "ae68e692-c0be-5b7b-98cc-3046bd1a9e34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"ae68e692-c0be-5b7b-98cc-3046bd1a9e34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "1ad47b23-c6ab-54dc-b8a7-6c145a812a88"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1ad47b23-c6ab-54dc-b8a7-6c145a812a88",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0ee9ad56-9d5f-5382-b818-35d031e09f2a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0ee9ad56-9d5f-5382-b818-35d031e09f2a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "3f030dc7-69fa-5fcd-aeb0-dfe80b67ab80"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3f030dc7-69fa-5fcd-aeb0-dfe80b67ab80",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "303ac22a-fbd5-5cf4-a1ce-79ce49eaa531"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "303ac22a-fbd5-5cf4-a1ce-79ce49eaa531",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "ce15e5b2-1808-46a4-8157-8c01c42c9619"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "ce15e5b2-1808-46a4-8157-8c01c42c9619",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce15e5b2-1808-46a4-8157-8c01c42c9619",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c4f7dd89-5baa-40a2-b68c-b2fb57c26d6d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "21839463-bb07-43b2-ab52-90c8cca2ff8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "88819bed-3f7d-4ab6-9ba7-d06dd67bf543"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "88819bed-3f7d-4ab6-9ba7-d06dd67bf543",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "88819bed-3f7d-4ab6-9ba7-d06dd67bf543",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "f6646fac-391a-438a-9a8e-34f4813533d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ef786fc3-b9f3-4d50-8a55-4788aa0d523b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "867d3e01-1a46-45c2-aafc-6b430032e71e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "867d3e01-1a46-45c2-aafc-6b430032e71e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "867d3e01-1a46-45c2-aafc-6b430032e71e",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "d7657b0a-a85a-4670-acfe-25a5bc79ddb3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "1599c0ea-fd25-4887-886f-1f0c00194b1c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "f6a85a9b-b08c-4622-ace3-d970dd8aac9d"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "f6a85a9b-b08c-4622-ace3-d970dd8aac9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:29.703 10:37:55 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:29.703 10:37:55 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:29.703 10:37:55 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:29.703 10:37:55 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:29.703 10:37:55 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:29.703 10:37:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:29.703 10:37:55 -- common/autotest_common.sh@10 -- # set +x 00:12:29.703 ************************************ 00:12:29.703 START TEST bdev_fio_trim 00:12:29.703 ************************************ 00:12:29.703 10:37:55 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:29.703 10:37:55 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:29.703 10:37:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:29.703 10:37:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:29.703 10:37:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:29.703 10:37:55 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:29.703 10:37:55 -- common/autotest_common.sh@1320 -- # shift 00:12:29.703 10:37:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:29.703 10:37:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:29.703 10:37:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:29.703 10:37:55 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:29.703 10:37:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:29.703 10:37:55 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:29.703 10:37:55 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:29.703 10:37:55 -- common/autotest_common.sh@1326 -- # break 00:12:29.703 10:37:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:29.703 10:37:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:29.703 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:29.703 fio-3.35 00:12:29.703 Starting 14 threads 00:12:41.907 00:12:41.907 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=121210: Wed Jul 24 10:38:06 2024 00:12:41.907 write: IOPS=140k, BW=547MiB/s (573MB/s)(5472MiB/10008msec); 0 zone resets 00:12:41.907 slat (usec): min=2, max=32059, avg=36.11, stdev=398.41 00:12:41.907 clat (usec): min=24, max=32298, avg=253.98, stdev=1064.05 00:12:41.907 lat (usec): min=37, max=32331, avg=290.09, stdev=1135.66 00:12:41.907 clat percentiles (usec): 00:12:41.907 | 50.000th=[ 165], 99.000th=[ 482], 99.900th=[16319], 99.990th=[20317], 00:12:41.907 | 99.999th=[28181] 00:12:41.907 bw ( KiB/s): min=370869, max=928806, per=100.00%, avg=561530.82, stdev=12013.89, samples=267 00:12:41.907 iops : min=92717, max=232203, avg=140382.68, stdev=3003.48, samples=267 00:12:41.907 trim: IOPS=140k, BW=547MiB/s (573MB/s)(5472MiB/10008msec); 0 zone resets 00:12:41.907 slat (usec): min=4, max=32035, avg=25.08, stdev=337.70 00:12:41.907 clat (usec): min=4, max=32331, avg=268.38, stdev=1076.29 00:12:41.907 lat (usec): min=12, max=32351, avg=293.46, stdev=1127.91 00:12:41.907 clat percentiles (usec): 00:12:41.907 | 50.000th=[ 184], 99.000th=[ 424], 99.900th=[16319], 99.990th=[20317], 00:12:41.907 | 99.999th=[28181] 00:12:41.907 bw ( KiB/s): min=370805, max=928814, per=100.00%, avg=561531.66, stdev=12014.68, samples=267 00:12:41.907 iops : min=92701, max=232203, avg=140382.68, stdev=3003.65, samples=267 00:12:41.907 lat (usec) : 10=0.14%, 20=0.39%, 50=1.46%, 100=11.06%, 250=66.79% 00:12:41.907 lat (usec) : 500=19.42%, 750=0.18%, 1000=0.01% 00:12:41.907 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.48%, 50=0.01% 00:12:41.907 cpu : usr=69.03%, sys=0.38%, ctx=168566, majf=0, minf=9087 00:12:41.907 IO depths : 1=12.2%, 2=24.5%, 4=50.0%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.907 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.907 issued rwts: total=0,1400951,1400952,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.907 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:41.907 00:12:41.907 Run status group 0 (all jobs): 00:12:41.907 WRITE: bw=547MiB/s (573MB/s), 547MiB/s-547MiB/s (573MB/s-573MB/s), io=5472MiB (5738MB), run=10008-10008msec 00:12:41.907 TRIM: bw=547MiB/s (573MB/s), 547MiB/s-547MiB/s (573MB/s-573MB/s), io=5472MiB (5738MB), run=10008-10008msec 00:12:41.907 ----------------------------------------------------- 00:12:41.907 Suppressions used: 00:12:41.907 count bytes template 00:12:41.907 14 129 /usr/src/fio/parse.c 00:12:41.907 1 904 libcrypto.so 00:12:41.907 ----------------------------------------------------- 00:12:41.907 00:12:41.907 00:12:41.907 real 0m11.727s 00:12:41.907 user 1m39.302s 00:12:41.907 sys 0m1.352s 00:12:41.907 10:38:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.907 ************************************ 00:12:41.907 END TEST bdev_fio_trim 00:12:41.907 ************************************ 00:12:41.907 10:38:07 -- common/autotest_common.sh@10 -- # set +x 00:12:41.907 10:38:07 -- 
bdev/blockdev.sh@366 -- # rm -f 00:12:41.907 10:38:07 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:41.907 10:38:07 -- bdev/blockdev.sh@368 -- # popd 00:12:41.907 /home/vagrant/spdk_repo/spdk 00:12:41.907 10:38:07 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:41.907 00:12:41.907 real 0m23.996s 00:12:41.907 user 3m11.432s 00:12:41.907 sys 0m6.508s 00:12:41.907 10:38:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.907 10:38:07 -- common/autotest_common.sh@10 -- # set +x 00:12:41.907 ************************************ 00:12:41.907 END TEST bdev_fio 00:12:41.907 ************************************ 00:12:41.907 10:38:07 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:41.907 10:38:07 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:41.907 10:38:07 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:41.907 10:38:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:41.907 10:38:07 -- common/autotest_common.sh@10 -- # set +x 00:12:41.907 ************************************ 00:12:41.907 START TEST bdev_verify 00:12:41.907 ************************************ 00:12:41.908 10:38:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:41.908 [2024-07-24 10:38:07.862750] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:41.908 [2024-07-24 10:38:07.863221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121378 ] 00:12:41.908 [2024-07-24 10:38:08.013784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:41.908 [2024-07-24 10:38:08.113671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.908 [2024-07-24 10:38:08.113677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.908 [2024-07-24 10:38:08.265681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:41.908 [2024-07-24 10:38:08.266119] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:41.908 [2024-07-24 10:38:08.273587] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:41.908 [2024-07-24 10:38:08.273848] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:41.908 [2024-07-24 10:38:08.281657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:41.908 [2024-07-24 10:38:08.281935] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:41.908 [2024-07-24 10:38:08.282097] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:41.908 [2024-07-24 10:38:08.382317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:41.908 [2024-07-24 10:38:08.382760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.908 [2024-07-24 10:38:08.382995] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:12:41.908 [2024-07-24 10:38:08.383161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.908 [2024-07-24 10:38:08.386329] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.908 [2024-07-24 10:38:08.386524] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:42.167 Running I/O for 5 seconds... 00:12:47.439 00:12:47.439 Latency(us) 00:12:47.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.439 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x1000 00:12:47.439 Malloc0 : 5.19 1546.68 6.04 0.00 0.00 81895.80 1936.29 237359.48 00:12:47.439 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x1000 length 0x1000 00:12:47.439 Malloc0 : 5.17 1528.54 5.97 0.00 0.00 83064.56 2100.13 320292.31 00:12:47.439 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x800 00:12:47.439 Malloc1p0 : 5.19 1077.17 4.21 0.00 0.00 117478.32 4259.84 145847.39 00:12:47.439 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x800 length 0x800 00:12:47.439 Malloc1p0 : 5.17 1081.31 4.22 0.00 0.00 117299.78 4349.21 146800.64 00:12:47.439 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x800 00:12:47.439 Malloc1p1 : 5.19 1076.79 4.21 0.00 0.00 117343.20 4170.47 142034.39 00:12:47.439 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x800 length 0x800 00:12:47.439 Malloc1p1 : 5.17 1081.03 4.22 0.00 0.00 117125.67 4200.26 142034.39 00:12:47.439 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p0 : 5.20 1076.39 4.20 0.00 0.00 117195.25 4051.32 137268.13 00:12:47.439 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p0 : 5.18 1080.74 4.22 0.00 0.00 116958.29 4140.68 137268.13 00:12:47.439 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p1 : 5.20 1075.83 4.20 0.00 0.00 117048.48 4289.63 132501.88 00:12:47.439 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p1 : 5.18 1080.46 4.22 0.00 0.00 116799.97 4289.63 132501.88 00:12:47.439 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p2 : 5.20 1075.25 4.20 0.00 0.00 116936.64 4021.53 128688.87 00:12:47.439 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p2 : 5.18 1080.18 4.22 0.00 0.00 116658.40 4110.89 128688.87 00:12:47.439 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p3 : 5.20 1074.63 4.20 0.00 0.00 116777.25 4200.26 124875.87 00:12:47.439 Job: 
Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p3 : 5.18 1079.91 4.22 0.00 0.00 116487.39 4200.26 123922.62 00:12:47.439 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p4 : 5.21 1073.92 4.20 0.00 0.00 116639.78 4259.84 120586.24 00:12:47.439 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p4 : 5.18 1079.63 4.22 0.00 0.00 116309.36 4200.26 120109.61 00:12:47.439 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p5 : 5.23 1086.26 4.24 0.00 0.00 115936.37 4200.26 116296.61 00:12:47.439 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p5 : 5.18 1079.34 4.22 0.00 0.00 116143.61 4319.42 115819.99 00:12:47.439 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p6 : 5.23 1085.93 4.24 0.00 0.00 115774.35 4200.26 112483.61 00:12:47.439 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p6 : 5.20 1092.77 4.27 0.00 0.00 115072.67 4200.26 112006.98 00:12:47.439 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x200 00:12:47.439 Malloc2p7 : 5.23 1085.62 4.24 0.00 0.00 115623.38 4140.68 108193.98 00:12:47.439 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x200 length 0x200 00:12:47.439 Malloc2p7 : 5.20 1092.31 4.27 0.00 0.00 114919.80 4110.89 107717.35 00:12:47.439 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x1000 00:12:47.439 TestPT : 5.23 1074.57 4.20 0.00 0.00 116638.36 6970.65 107240.73 00:12:47.439 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x1000 length 0x1000 00:12:47.439 TestPT : 5.20 1077.41 4.21 0.00 0.00 116280.85 9294.20 108193.98 00:12:47.439 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x2000 00:12:47.439 raid0 : 5.23 1084.93 4.24 0.00 0.00 115221.84 4527.94 95325.09 00:12:47.439 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x2000 length 0x2000 00:12:47.439 raid0 : 5.20 1091.33 4.26 0.00 0.00 114542.01 4289.63 97708.22 00:12:47.439 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x2000 00:12:47.439 concat0 : 5.23 1084.60 4.24 0.00 0.00 115039.35 4468.36 91035.46 00:12:47.439 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x2000 length 0x2000 00:12:47.439 concat0 : 5.20 1090.74 4.26 0.00 0.00 114376.62 4676.89 92941.96 00:12:47.439 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 
length 0x1000 00:12:47.439 raid1 : 5.24 1084.26 4.24 0.00 0.00 114866.94 4825.83 88175.71 00:12:47.439 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x1000 length 0x1000 00:12:47.439 raid1 : 5.21 1090.06 4.26 0.00 0.00 114213.18 4885.41 89605.59 00:12:47.439 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x0 length 0x4e2 00:12:47.439 AIO0 : 5.24 1083.97 4.23 0.00 0.00 114592.72 7983.48 90558.84 00:12:47.439 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:47.439 Verification LBA range: start 0x4e2 length 0x4e2 00:12:47.439 AIO0 : 5.21 1089.43 4.26 0.00 0.00 113936.43 7983.48 91988.71 00:12:47.439 =================================================================================================================== 00:12:47.439 Total : 35542.02 138.84 0.00 0.00 113111.54 1936.29 320292.31 00:12:48.008 ************************************ 00:12:48.008 END TEST bdev_verify 00:12:48.008 ************************************ 00:12:48.008 00:12:48.008 real 0m6.660s 00:12:48.008 user 0m11.570s 00:12:48.008 sys 0m0.595s 00:12:48.008 10:38:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.008 10:38:14 -- common/autotest_common.sh@10 -- # set +x 00:12:48.008 10:38:14 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:48.008 10:38:14 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:12:48.008 10:38:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:48.008 10:38:14 -- common/autotest_common.sh@10 -- # set +x 00:12:48.008 ************************************ 00:12:48.008 START TEST bdev_verify_big_io 00:12:48.008 ************************************ 00:12:48.008 10:38:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:48.008 [2024-07-24 10:38:14.563558] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:12:48.008 [2024-07-24 10:38:14.564023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121479 ] 00:12:48.267 [2024-07-24 10:38:14.711270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:48.267 [2024-07-24 10:38:14.817229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.267 [2024-07-24 10:38:14.817236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.525 [2024-07-24 10:38:14.969230] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:48.525 [2024-07-24 10:38:14.969623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:48.525 [2024-07-24 10:38:14.977154] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:48.525 [2024-07-24 10:38:14.977393] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:48.525 [2024-07-24 10:38:14.985288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:48.525 [2024-07-24 10:38:14.985500] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:48.525 [2024-07-24 10:38:14.985671] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:48.525 [2024-07-24 10:38:15.084528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:48.525 [2024-07-24 10:38:15.084961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.525 [2024-07-24 10:38:15.085077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:48.525 [2024-07-24 10:38:15.085392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.525 [2024-07-24 10:38:15.088406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.525 [2024-07-24 10:38:15.088596] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:48.784 [2024-07-24 10:38:15.281598] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.283052] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.284858] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.286604] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.287761] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.289472] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.290745] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.292511] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.293813] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.295616] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.296799] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.298642] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.299883] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.301658] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.303523] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:48.784 [2024-07-24 10:38:15.304703] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:48.785 [2024-07-24 10:38:15.332220] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:48.785 [2024-07-24 10:38:15.334928] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:48.785 Running I/O for 5 seconds... 00:12:55.354 00:12:55.354 Latency(us) 00:12:55.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:55.354 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.354 Verification LBA range: start 0x0 length 0x100 00:12:55.354 Malloc0 : 5.63 331.10 20.69 0.00 0.00 379507.30 23116.33 1166779.11 00:12:55.354 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.354 Verification LBA range: start 0x100 length 0x100 00:12:55.354 Malloc0 : 5.76 280.94 17.56 0.00 0.00 443127.27 32648.84 1441315.37 00:12:55.354 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.354 Verification LBA range: start 0x0 length 0x80 00:12:55.354 Malloc1p0 : 5.63 259.78 16.24 0.00 0.00 477622.69 43611.23 762600.73 00:12:55.354 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.354 Verification LBA range: start 0x80 length 0x80 00:12:55.354 Malloc1p0 : 5.99 105.71 6.61 0.00 0.00 1150483.30 55288.55 2409818.30 00:12:55.354 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.354 Verification LBA range: start 0x0 length 0x80 00:12:55.354 Malloc1p1 : 5.70 153.89 9.62 0.00 0.00 788257.52 42181.35 1845493.76 00:12:55.354 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.354 Verification LBA range: start 0x80 length 0x80 00:12:55.354 Malloc1p1 : 5.99 105.69 6.61 0.00 0.00 1124469.00 56480.12 2409818.30 00:12:55.354 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p0 : 5.63 65.31 4.08 0.00 0.00 459675.62 7983.48 671088.64 00:12:55.355 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p0 : 5.76 56.92 3.56 0.00 0.00 517566.20 11141.12 808356.77 00:12:55.355 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p1 : 5.64 65.30 4.08 0.00 0.00 457666.32 7685.59 655836.63 00:12:55.355 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p1 : 5.76 56.91 3.56 0.00 0.00 514600.37 9889.98 789291.75 00:12:55.355 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p2 : 5.64 65.28 4.08 0.00 0.00 455668.99 8281.37 640584.61 00:12:55.355 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p2 : 5.77 56.89 3.56 0.00 0.00 511598.30 9294.20 770226.73 00:12:55.355 Job: Malloc2p3 (Core Mask 0x1, 
workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p3 : 5.64 65.27 4.08 0.00 0.00 453534.13 7506.85 625332.60 00:12:55.355 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p3 : 5.77 56.88 3.56 0.00 0.00 508582.25 11260.28 747348.71 00:12:55.355 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p4 : 5.64 65.26 4.08 0.00 0.00 451540.29 7387.69 610080.58 00:12:55.355 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p4 : 5.77 56.87 3.55 0.00 0.00 505799.51 10426.18 728283.69 00:12:55.355 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p5 : 5.64 65.24 4.08 0.00 0.00 449469.54 7506.85 594828.57 00:12:55.355 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p5 : 5.83 59.84 3.74 0.00 0.00 483315.96 11319.85 709218.68 00:12:55.355 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p6 : 5.70 68.42 4.28 0.00 0.00 430213.75 7804.74 575763.55 00:12:55.355 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p6 : 5.83 59.83 3.74 0.00 0.00 480465.60 10068.71 686340.65 00:12:55.355 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x20 00:12:55.355 Malloc2p7 : 5.70 68.40 4.27 0.00 0.00 428401.35 7357.91 564324.54 00:12:55.355 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x20 length 0x20 00:12:55.355 Malloc2p7 : 5.83 59.82 3.74 0.00 0.00 477777.93 10962.39 667275.64 00:12:55.355 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x100 00:12:55.355 TestPT : 5.85 120.17 7.51 0.00 0.00 958884.14 53143.74 2257298.15 00:12:55.355 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x100 length 0x100 00:12:55.355 TestPT : 6.01 105.32 6.58 0.00 0.00 1060728.82 106764.10 2364062.25 00:12:55.355 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x200 00:12:55.355 raid0 : 5.85 123.84 7.74 0.00 0.00 919697.37 42657.98 2181038.08 00:12:55.355 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x200 length 0x200 00:12:55.355 raid0 : 5.93 117.18 7.32 0.00 0.00 949905.27 48377.48 2379314.27 00:12:55.355 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x200 00:12:55.355 concat0 : 5.82 130.18 8.14 0.00 0.00 863233.94 42896.29 2196290.09 00:12:55.355 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x200 length 0x200 00:12:55.355 concat0 : 5.93 130.46 
8.15 0.00 0.00 834347.95 35746.91 2379314.27 00:12:55.355 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x100 00:12:55.355 raid1 : 5.89 140.82 8.80 0.00 0.00 784018.51 11319.85 2211542.11 00:12:55.355 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x100 length 0x100 00:12:55.355 raid1 : 5.99 152.98 9.56 0.00 0.00 698850.45 18469.24 2364062.25 00:12:55.355 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x0 length 0x4e 00:12:55.355 AIO0 : 5.85 143.36 8.96 0.00 0.00 465352.73 2546.97 1288795.23 00:12:55.355 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:55.355 Verification LBA range: start 0x4e length 0x4e 00:12:55.355 AIO0 : 6.07 175.05 10.94 0.00 0.00 366570.45 685.15 1380307.32 00:12:55.355 =================================================================================================================== 00:12:55.355 Total : 3568.90 223.06 0.00 0.00 627010.23 685.15 2409818.30 00:12:55.355 ************************************ 00:12:55.355 END TEST bdev_verify_big_io 00:12:55.355 ************************************ 00:12:55.355 00:12:55.355 real 0m7.403s 00:12:55.355 user 0m13.528s 00:12:55.355 sys 0m0.494s 00:12:55.355 10:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.355 10:38:21 -- common/autotest_common.sh@10 -- # set +x 00:12:55.355 10:38:21 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.355 10:38:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:55.355 10:38:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.355 10:38:21 -- common/autotest_common.sh@10 -- # set +x 00:12:55.355 ************************************ 00:12:55.355 START TEST bdev_write_zeroes 00:12:55.355 ************************************ 00:12:55.355 10:38:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.355 [2024-07-24 10:38:22.027035] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
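(Annotation: the "Queue depth is limited to ..." warnings in the big-I/O run above follow from bdev capacity versus the 65536-byte I/O size, using the sizes reported in the bdev dump earlier:

    Malloc2p0..p7: 8192 blocks x 512 B = 4 MiB  -> 4 MiB / 64 KiB = 64 possible I/Os, depth clamped to 32
    AIO0:          5000 blocks x 2048 B ~ 9.8 MiB -> ~156 possible I/Os, depth clamped to 78

The clamp landing at half of those counts is consistent with the per-bdev budget being spread over the two cores in the -m 0x3 mask, so the requested -q 128 cannot be honored on the small split bdevs.)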
00:12:55.355 [2024-07-24 10:38:22.027251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121599 ] 00:12:55.615 [2024-07-24 10:38:22.170866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.615 [2024-07-24 10:38:22.264175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.873 [2024-07-24 10:38:22.410514] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:55.873 [2024-07-24 10:38:22.410652] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:55.873 [2024-07-24 10:38:22.418416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:55.873 [2024-07-24 10:38:22.418544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:55.873 [2024-07-24 10:38:22.426538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:55.873 [2024-07-24 10:38:22.426632] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:55.873 [2024-07-24 10:38:22.426692] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:55.873 [2024-07-24 10:38:22.533102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:55.873 [2024-07-24 10:38:22.533284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.873 [2024-07-24 10:38:22.533382] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:55.873 [2024-07-24 10:38:22.533473] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.873 [2024-07-24 10:38:22.536513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.873 [2024-07-24 10:38:22.536602] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:56.243 Running I/O for 1 seconds... 
00:12:57.178 00:12:57.178 Latency(us) 00:12:57.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.178 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc0 : 1.04 4902.09 19.15 0.00 0.00 26086.25 830.37 46709.29 00:12:57.178 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc1p0 : 1.05 4895.44 19.12 0.00 0.00 26076.55 1131.99 45756.04 00:12:57.178 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc1p1 : 1.05 4888.97 19.10 0.00 0.00 26041.07 1109.64 44564.48 00:12:57.178 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p0 : 1.05 4882.54 19.07 0.00 0.00 26020.18 1012.83 43611.23 00:12:57.178 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p1 : 1.05 4875.87 19.05 0.00 0.00 25992.34 1012.83 42657.98 00:12:57.178 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p2 : 1.05 4869.25 19.02 0.00 0.00 25965.28 997.93 41704.73 00:12:57.178 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p3 : 1.05 4862.87 19.00 0.00 0.00 25944.93 1012.83 40751.48 00:12:57.178 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p4 : 1.05 4856.63 18.97 0.00 0.00 25917.48 1027.72 39798.23 00:12:57.178 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p5 : 1.06 4850.50 18.95 0.00 0.00 25889.83 1109.64 38606.66 00:12:57.178 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p6 : 1.06 4844.07 18.92 0.00 0.00 25865.50 1050.07 37653.41 00:12:57.178 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 Malloc2p7 : 1.06 4837.71 18.90 0.00 0.00 25845.03 1042.62 36700.16 00:12:57.178 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 TestPT : 1.06 4831.63 18.87 0.00 0.00 25812.53 1169.22 35508.60 00:12:57.178 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 raid0 : 1.06 4824.24 18.84 0.00 0.00 25776.34 1832.03 33602.09 00:12:57.178 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 concat0 : 1.06 4817.03 18.82 0.00 0.00 25715.65 1854.37 31695.59 00:12:57.178 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 raid1 : 1.06 4808.06 18.78 0.00 0.00 25636.57 2904.44 28716.68 00:12:57.178 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.178 AIO0 : 1.07 4900.94 19.14 0.00 0.00 25015.18 539.93 28120.90 00:12:57.179 =================================================================================================================== 00:12:57.179 Total : 77747.83 303.70 0.00 0.00 25848.85 539.93 46709.29 00:12:57.745 ************************************ 00:12:57.745 END TEST bdev_write_zeroes 00:12:57.745 ************************************ 00:12:57.745 00:12:57.745 real 0m2.292s 00:12:57.745 user 0m1.726s 00:12:57.745 sys 0m0.364s 00:12:57.745 10:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.745 10:38:24 -- common/autotest_common.sh@10 -- # set +x 00:12:57.745 10:38:24 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.745 10:38:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:57.745 10:38:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:57.745 10:38:24 -- common/autotest_common.sh@10 -- # set +x 00:12:57.745 ************************************ 00:12:57.745 START TEST bdev_json_nonenclosed 00:12:57.745 ************************************ 00:12:57.745 10:38:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:57.745 [2024-07-24 10:38:24.385596] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:57.745 [2024-07-24 10:38:24.386378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121649 ] 00:12:58.004 [2024-07-24 10:38:24.533666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.004 [2024-07-24 10:38:24.607201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.004 [2024-07-24 10:38:24.607611] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:58.004 [2024-07-24 10:38:24.607688] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:58.262 00:12:58.262 real 0m0.385s 00:12:58.262 user 0m0.184s 00:12:58.262 sys 0m0.100s 00:12:58.262 10:38:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.262 10:38:24 -- common/autotest_common.sh@10 -- # set +x 00:12:58.262 ************************************ 00:12:58.262 END TEST bdev_json_nonenclosed 00:12:58.262 ************************************ 00:12:58.262 10:38:24 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:58.262 10:38:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:58.262 10:38:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.262 10:38:24 -- common/autotest_common.sh@10 -- # set +x 00:12:58.262 ************************************ 00:12:58.262 START TEST bdev_json_nonarray 00:12:58.262 ************************************ 00:12:58.262 10:38:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:58.262 [2024-07-24 10:38:24.816157] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
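(Annotation: bdev_json_nonenclosed above feeds bdevperf a config that is not wrapped in a JSON object and expects exactly the "Invalid JSON configuration: not enclosed in {}" *ERROR* seen before spdk_app_stop; the bdev_json_nonarray run starting here exercises the companion check that "subsystems" must be an array. A minimal sketch of the shape a config accepted by --json takes, with illustrative contents:

    cat > /tmp/valid.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 65536, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF

The two negative tests deliberately break that shape, one level at a time, and pass as long as json_config rejects the file instead of starting the app.)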
00:12:58.262 [2024-07-24 10:38:24.816432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121671 ] 00:12:58.520 [2024-07-24 10:38:24.953526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.520 [2024-07-24 10:38:25.037409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.521 [2024-07-24 10:38:25.037656] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:58.521 [2024-07-24 10:38:25.037705] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:58.521 00:12:58.521 real 0m0.384s 00:12:58.521 user 0m0.186s 00:12:58.521 sys 0m0.099s 00:12:58.521 ************************************ 00:12:58.521 END TEST bdev_json_nonarray 00:12:58.521 ************************************ 00:12:58.521 10:38:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.521 10:38:25 -- common/autotest_common.sh@10 -- # set +x 00:12:58.521 10:38:25 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:12:58.521 10:38:25 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:12:58.521 10:38:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:58.521 10:38:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.521 10:38:25 -- common/autotest_common.sh@10 -- # set +x 00:12:58.779 ************************************ 00:12:58.779 START TEST bdev_qos 00:12:58.779 ************************************ 00:12:58.779 10:38:25 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:12:58.779 10:38:25 -- bdev/blockdev.sh@444 -- # QOS_PID=121709 00:12:58.779 Process qos testing pid: 121709 00:12:58.779 10:38:25 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 121709' 00:12:58.779 10:38:25 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:58.779 10:38:25 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:58.779 10:38:25 -- bdev/blockdev.sh@447 -- # waitforlisten 121709 00:12:58.779 10:38:25 -- common/autotest_common.sh@819 -- # '[' -z 121709 ']' 00:12:58.779 10:38:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.779 10:38:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:58.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.779 10:38:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.779 10:38:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:58.779 10:38:25 -- common/autotest_common.sh@10 -- # set +x 00:12:58.779 [2024-07-24 10:38:25.258469] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
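(Annotation: bdev_qos starts its bdevperf with -z -m 0x2 -q 256 -o 4096 -w randread -t 60, i.e. it comes up idle on core 1 and waits on the /var/tmp/spdk.sock RPC socket (the waitforlisten call above) before any I/O runs. The suite then creates Malloc_0 and Null_1 over rpc_cmd, as traced below, and the Malloc_0 dump still shows every assigned_rate_limits field at 0 because no limit has been applied yet. A hypothetical example of the kind of RPC a QoS run builds up to, with rpc.py standing in for scripts/rpc.py and purely illustrative limit values:

    # Cap Malloc_0 at 20k IOPS and 100 MiB/s aggregate; a value of 0 means "no limit" for that field.
    rpc.py bdev_set_qos_limit Malloc_0 --rw_ios_per_sec 20000 --rw_mbytes_per_sec 100)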
00:12:58.779 [2024-07-24 10:38:25.259269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121709 ] 00:12:58.779 [2024-07-24 10:38:25.410181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.037 [2024-07-24 10:38:25.545919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.604 10:38:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:59.604 10:38:26 -- common/autotest_common.sh@852 -- # return 0 00:12:59.604 10:38:26 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:59.604 10:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.604 10:38:26 -- common/autotest_common.sh@10 -- # set +x 00:12:59.863 Malloc_0 00:12:59.875 10:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.875 10:38:26 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:12:59.875 10:38:26 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:12:59.875 10:38:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:59.875 10:38:26 -- common/autotest_common.sh@889 -- # local i 00:12:59.875 10:38:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:59.875 10:38:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:59.875 10:38:26 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:59.875 10:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.875 10:38:26 -- common/autotest_common.sh@10 -- # set +x 00:12:59.875 10:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.875 10:38:26 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:59.875 10:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.875 10:38:26 -- common/autotest_common.sh@10 -- # set +x 00:12:59.875 [ 00:12:59.875 { 00:12:59.875 "name": "Malloc_0", 00:12:59.875 "aliases": [ 00:12:59.875 "ced9ccde-c17c-4532-8b04-9c93083914d2" 00:12:59.875 ], 00:12:59.875 "product_name": "Malloc disk", 00:12:59.875 "block_size": 512, 00:12:59.875 "num_blocks": 262144, 00:12:59.875 "uuid": "ced9ccde-c17c-4532-8b04-9c93083914d2", 00:12:59.875 "assigned_rate_limits": { 00:12:59.875 "rw_ios_per_sec": 0, 00:12:59.875 "rw_mbytes_per_sec": 0, 00:12:59.875 "r_mbytes_per_sec": 0, 00:12:59.875 "w_mbytes_per_sec": 0 00:12:59.875 }, 00:12:59.875 "claimed": false, 00:12:59.875 "zoned": false, 00:12:59.875 "supported_io_types": { 00:12:59.875 "read": true, 00:12:59.875 "write": true, 00:12:59.875 "unmap": true, 00:12:59.875 "write_zeroes": true, 00:12:59.875 "flush": true, 00:12:59.875 "reset": true, 00:12:59.875 "compare": false, 00:12:59.875 "compare_and_write": false, 00:12:59.875 "abort": true, 00:12:59.875 "nvme_admin": false, 00:12:59.875 "nvme_io": false 00:12:59.875 }, 00:12:59.875 "memory_domains": [ 00:12:59.875 { 00:12:59.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.875 "dma_device_type": 2 00:12:59.875 } 00:12:59.875 ], 00:12:59.875 "driver_specific": {} 00:12:59.875 } 00:12:59.875 ] 00:12:59.875 10:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.875 10:38:26 -- common/autotest_common.sh@895 -- # return 0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:59.875 10:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.875 10:38:26 -- common/autotest_common.sh@10 -- # 
set +x 00:12:59.875 Null_1 00:12:59.875 10:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.875 10:38:26 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:12:59.875 10:38:26 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:12:59.875 10:38:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:59.875 10:38:26 -- common/autotest_common.sh@889 -- # local i 00:12:59.875 10:38:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:59.875 10:38:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:59.875 10:38:26 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:12:59.875 10:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.875 10:38:26 -- common/autotest_common.sh@10 -- # set +x 00:12:59.875 10:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.875 10:38:26 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:59.875 10:38:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:59.875 10:38:26 -- common/autotest_common.sh@10 -- # set +x 00:12:59.875 [ 00:12:59.875 { 00:12:59.875 "name": "Null_1", 00:12:59.875 "aliases": [ 00:12:59.875 "e43ffaaa-7203-4b16-9738-b2f744404579" 00:12:59.875 ], 00:12:59.875 "product_name": "Null disk", 00:12:59.875 "block_size": 512, 00:12:59.875 "num_blocks": 262144, 00:12:59.875 "uuid": "e43ffaaa-7203-4b16-9738-b2f744404579", 00:12:59.875 "assigned_rate_limits": { 00:12:59.875 "rw_ios_per_sec": 0, 00:12:59.875 "rw_mbytes_per_sec": 0, 00:12:59.875 "r_mbytes_per_sec": 0, 00:12:59.875 "w_mbytes_per_sec": 0 00:12:59.875 }, 00:12:59.875 "claimed": false, 00:12:59.875 "zoned": false, 00:12:59.875 "supported_io_types": { 00:12:59.875 "read": true, 00:12:59.875 "write": true, 00:12:59.875 "unmap": false, 00:12:59.875 "write_zeroes": true, 00:12:59.875 "flush": false, 00:12:59.875 "reset": true, 00:12:59.875 "compare": false, 00:12:59.875 "compare_and_write": false, 00:12:59.875 "abort": true, 00:12:59.875 "nvme_admin": false, 00:12:59.875 "nvme_io": false 00:12:59.875 }, 00:12:59.875 "driver_specific": {} 00:12:59.875 } 00:12:59.875 ] 00:12:59.875 10:38:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:59.875 10:38:26 -- common/autotest_common.sh@895 -- # return 0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@455 -- # qos_function_test 00:12:59.875 10:38:26 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:12:59.875 10:38:26 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:59.875 10:38:26 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:12:59.875 10:38:26 -- bdev/blockdev.sh@410 -- # local io_result=0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:59.875 10:38:26 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:59.875 10:38:26 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:59.875 10:38:26 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:59.875 10:38:26 -- bdev/blockdev.sh@376 -- # tail -1 00:12:59.875 Running I/O for 60 seconds... 
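The qos_function_test sequence traced above measures the unthrottled read rate of Malloc_0 by polling iostat.py and keeping the last sample, then caps the bdev with bdev_set_qos_limit. A minimal shell sketch of that measure-then-throttle flow, built only from commands that appear in this run (invoking scripts/rpc.py directly is an assumption; the test itself goes through its rpc_cmd helper):

  # Sample read IOPS on Malloc_0 once per second for 5 seconds and keep the last line
  ./scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1
  # This run observed roughly 71524 unthrottled IOPS, then capped the bdev at 17000 read/write IOPS
  ./scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0

The bdev_qos_iops stage then re-runs the same iostat pipeline and passes only if the measured rate lands inside the 15300/18700 window (plus or minus 10% of the 17000 limit) shown in the output below.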
00:13:05.144 10:38:31 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 71524.32 286097.28 0.00 0.00 289792.00 0.00 0.00 ' 00:13:05.144 10:38:31 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:05.144 10:38:31 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:05.144 10:38:31 -- bdev/blockdev.sh@378 -- # iostat_result=71524.32 00:13:05.144 10:38:31 -- bdev/blockdev.sh@383 -- # echo 71524 00:13:05.144 10:38:31 -- bdev/blockdev.sh@414 -- # io_result=71524 00:13:05.144 10:38:31 -- bdev/blockdev.sh@416 -- # iops_limit=17000 00:13:05.144 10:38:31 -- bdev/blockdev.sh@417 -- # '[' 17000 -gt 1000 ']' 00:13:05.144 10:38:31 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:13:05.144 10:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.144 10:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:05.144 10:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.144 10:38:31 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:13:05.144 10:38:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:05.144 10:38:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.144 10:38:31 -- common/autotest_common.sh@10 -- # set +x 00:13:05.144 ************************************ 00:13:05.144 START TEST bdev_qos_iops 00:13:05.144 ************************************ 00:13:05.144 10:38:31 -- common/autotest_common.sh@1104 -- # run_qos_test 17000 IOPS Malloc_0 00:13:05.144 10:38:31 -- bdev/blockdev.sh@387 -- # local qos_limit=17000 00:13:05.144 10:38:31 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:05.144 10:38:31 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:05.144 10:38:31 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:05.144 10:38:31 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:05.144 10:38:31 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:05.144 10:38:31 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:05.144 10:38:31 -- bdev/blockdev.sh@376 -- # tail -1 00:13:05.144 10:38:31 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:10.432 10:38:36 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 16981.04 67924.17 0.00 0.00 69156.00 0.00 0.00 ' 00:13:10.432 10:38:36 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:10.432 10:38:36 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:10.432 10:38:36 -- bdev/blockdev.sh@378 -- # iostat_result=16981.04 00:13:10.432 10:38:36 -- bdev/blockdev.sh@383 -- # echo 16981 00:13:10.432 10:38:36 -- bdev/blockdev.sh@390 -- # qos_result=16981 00:13:10.432 10:38:36 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:10.432 10:38:36 -- bdev/blockdev.sh@394 -- # lower_limit=15300 00:13:10.432 10:38:36 -- bdev/blockdev.sh@395 -- # upper_limit=18700 00:13:10.432 10:38:36 -- bdev/blockdev.sh@398 -- # '[' 16981 -lt 15300 ']' 00:13:10.432 10:38:36 -- bdev/blockdev.sh@398 -- # '[' 16981 -gt 18700 ']' 00:13:10.432 00:13:10.432 real 0m5.214s 00:13:10.432 user 0m0.118s 00:13:10.432 sys 0m0.029s 00:13:10.432 ************************************ 00:13:10.433 10:38:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:10.433 10:38:36 -- common/autotest_common.sh@10 -- # set +x 00:13:10.433 END TEST bdev_qos_iops 00:13:10.433 ************************************ 00:13:10.433 10:38:36 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:10.433 10:38:36 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:10.433 10:38:36 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:10.433 10:38:36 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:10.433 10:38:36 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:10.433 10:38:36 -- bdev/blockdev.sh@376 -- # tail -1 00:13:10.433 10:38:36 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:15.700 10:38:42 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 27236.75 108947.02 0.00 0.00 110592.00 0.00 0.00 ' 00:13:15.700 10:38:42 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:15.700 10:38:42 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:15.700 10:38:42 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:15.700 10:38:42 -- bdev/blockdev.sh@380 -- # iostat_result=110592.00 00:13:15.700 10:38:42 -- bdev/blockdev.sh@383 -- # echo 110592 00:13:15.700 10:38:42 -- bdev/blockdev.sh@425 -- # bw_limit=110592 00:13:15.700 10:38:42 -- bdev/blockdev.sh@426 -- # bw_limit=10 00:13:15.700 10:38:42 -- bdev/blockdev.sh@427 -- # '[' 10 -lt 2 ']' 00:13:15.701 10:38:42 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:13:15.701 10:38:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.701 10:38:42 -- common/autotest_common.sh@10 -- # set +x 00:13:15.701 10:38:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.701 10:38:42 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:13:15.701 10:38:42 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:15.701 10:38:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.701 10:38:42 -- common/autotest_common.sh@10 -- # set +x 00:13:15.701 ************************************ 00:13:15.701 START TEST bdev_qos_bw 00:13:15.701 ************************************ 00:13:15.701 10:38:42 -- common/autotest_common.sh@1104 -- # run_qos_test 10 BANDWIDTH Null_1 00:13:15.701 10:38:42 -- bdev/blockdev.sh@387 -- # local qos_limit=10 00:13:15.701 10:38:42 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:15.701 10:38:42 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:15.701 10:38:42 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:15.701 10:38:42 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:15.701 10:38:42 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:15.701 10:38:42 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:15.701 10:38:42 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:15.701 10:38:42 -- bdev/blockdev.sh@376 -- # tail -1 00:13:20.969 10:38:47 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2561.52 10246.10 0.00 0.00 10524.00 0.00 0.00 ' 00:13:20.969 10:38:47 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:20.969 10:38:47 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:20.969 10:38:47 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:20.969 10:38:47 -- bdev/blockdev.sh@380 -- # iostat_result=10524.00 00:13:20.969 10:38:47 -- bdev/blockdev.sh@383 -- # echo 10524 00:13:20.969 10:38:47 -- bdev/blockdev.sh@390 -- # qos_result=10524 00:13:20.969 10:38:47 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:20.969 10:38:47 -- bdev/blockdev.sh@392 -- # qos_limit=10240 00:13:20.969 10:38:47 -- bdev/blockdev.sh@394 -- # lower_limit=9216 00:13:20.969 10:38:47 -- bdev/blockdev.sh@395 -- # upper_limit=11264 00:13:20.969 10:38:47 -- bdev/blockdev.sh@398 -- # '[' 10524 -lt 9216 ']' 00:13:20.969 10:38:47 -- bdev/blockdev.sh@398 -- # '[' 
10524 -gt 11264 ']' 00:13:20.969 00:13:20.969 real 0m5.271s 00:13:20.969 user 0m0.121s 00:13:20.969 sys 0m0.051s 00:13:20.969 10:38:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.969 10:38:47 -- common/autotest_common.sh@10 -- # set +x 00:13:20.969 ************************************ 00:13:20.969 END TEST bdev_qos_bw 00:13:20.969 ************************************ 00:13:20.969 10:38:47 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:20.969 10:38:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:20.969 10:38:47 -- common/autotest_common.sh@10 -- # set +x 00:13:20.969 10:38:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:20.969 10:38:47 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:20.969 10:38:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:20.969 10:38:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:20.969 10:38:47 -- common/autotest_common.sh@10 -- # set +x 00:13:20.969 ************************************ 00:13:20.970 START TEST bdev_qos_ro_bw 00:13:20.970 ************************************ 00:13:20.970 10:38:47 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:20.970 10:38:47 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:20.970 10:38:47 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:20.970 10:38:47 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:20.970 10:38:47 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:20.970 10:38:47 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:20.970 10:38:47 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:20.970 10:38:47 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:20.970 10:38:47 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:20.970 10:38:47 -- bdev/blockdev.sh@376 -- # tail -1 00:13:26.233 10:38:52 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.63 2046.51 0.00 0.00 2060.00 0.00 0.00 ' 00:13:26.233 10:38:52 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:26.233 10:38:52 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:26.233 10:38:52 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:26.233 10:38:52 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00 00:13:26.233 10:38:52 -- bdev/blockdev.sh@383 -- # echo 2060 00:13:26.233 10:38:52 -- bdev/blockdev.sh@390 -- # qos_result=2060 00:13:26.234 10:38:52 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:26.234 10:38:52 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:26.234 10:38:52 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:26.234 10:38:52 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:26.234 10:38:52 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']' 00:13:26.234 10:38:52 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']' 00:13:26.234 00:13:26.234 real 0m5.152s 00:13:26.234 user 0m0.108s 00:13:26.234 sys 0m0.018s 00:13:26.234 10:38:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.234 ************************************ 00:13:26.234 END TEST bdev_qos_ro_bw 00:13:26.234 ************************************ 00:13:26.234 10:38:52 -- common/autotest_common.sh@10 -- # set +x 00:13:26.234 10:38:52 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:26.234 10:38:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.234 10:38:52 -- common/autotest_common.sh@10 -- # set +x 00:13:26.812 10:38:53 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.812 10:38:53 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:26.812 10:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.812 10:38:53 -- common/autotest_common.sh@10 -- # set +x 00:13:26.812 00:13:26.812 Latency(us) 00:13:26.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.812 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:26.812 Malloc_0 : 26.71 23332.67 91.14 0.00 0.00 10869.05 2800.17 507129.48 00:13:26.812 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:26.812 Null_1 : 26.84 24720.24 96.56 0.00 0.00 10331.99 863.88 135361.63 00:13:26.812 =================================================================================================================== 00:13:26.812 Total : 48052.91 187.71 0.00 0.00 10592.08 863.88 507129.48 00:13:26.812 0 00:13:26.812 10:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.812 10:38:53 -- bdev/blockdev.sh@459 -- # killprocess 121709 00:13:26.812 10:38:53 -- common/autotest_common.sh@926 -- # '[' -z 121709 ']' 00:13:26.812 10:38:53 -- common/autotest_common.sh@930 -- # kill -0 121709 00:13:26.812 10:38:53 -- common/autotest_common.sh@931 -- # uname 00:13:26.812 10:38:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:26.812 10:38:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121709 00:13:26.812 10:38:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:26.812 10:38:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:26.812 killing process with pid 121709 00:13:26.812 10:38:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121709' 00:13:26.812 10:38:53 -- common/autotest_common.sh@945 -- # kill 121709 00:13:26.812 Received shutdown signal, test time was about 26.882272 seconds 00:13:26.812 00:13:26.812 Latency(us) 00:13:26.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.812 =================================================================================================================== 00:13:26.812 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.812 10:38:53 -- common/autotest_common.sh@950 -- # wait 121709 00:13:27.071 10:38:53 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:27.071 00:13:27.071 real 0m28.517s 00:13:27.071 user 0m29.312s 00:13:27.071 sys 0m0.682s 00:13:27.071 10:38:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.071 10:38:53 -- common/autotest_common.sh@10 -- # set +x 00:13:27.071 ************************************ 00:13:27.071 END TEST bdev_qos 00:13:27.071 ************************************ 00:13:27.329 10:38:53 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:27.329 10:38:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:27.329 10:38:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:27.329 10:38:53 -- common/autotest_common.sh@10 -- # set +x 00:13:27.329 ************************************ 00:13:27.329 START TEST bdev_qd_sampling 00:13:27.329 ************************************ 00:13:27.329 10:38:53 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:13:27.329 10:38:53 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:27.329 10:38:53 -- bdev/blockdev.sh@539 -- # QD_PID=122174 00:13:27.329 Process bdev QD sampling period testing pid: 122174 00:13:27.329 10:38:53 -- 
bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 122174' 00:13:27.329 10:38:53 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:27.329 10:38:53 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:27.329 10:38:53 -- bdev/blockdev.sh@542 -- # waitforlisten 122174 00:13:27.329 10:38:53 -- common/autotest_common.sh@819 -- # '[' -z 122174 ']' 00:13:27.329 10:38:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.329 10:38:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:27.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.329 10:38:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.329 10:38:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:27.329 10:38:53 -- common/autotest_common.sh@10 -- # set +x 00:13:27.329 [2024-07-24 10:38:53.858046] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:27.329 [2024-07-24 10:38:53.858416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122174 ] 00:13:27.588 [2024-07-24 10:38:54.019305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:27.588 [2024-07-24 10:38:54.116225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.588 [2024-07-24 10:38:54.116231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.523 10:38:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:28.523 10:38:54 -- common/autotest_common.sh@852 -- # return 0 00:13:28.523 10:38:54 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:28.523 10:38:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.523 10:38:54 -- common/autotest_common.sh@10 -- # set +x 00:13:28.523 Malloc_QD 00:13:28.523 10:38:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.523 10:38:54 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:28.523 10:38:54 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:13:28.523 10:38:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:28.523 10:38:54 -- common/autotest_common.sh@889 -- # local i 00:13:28.523 10:38:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:28.523 10:38:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:28.523 10:38:54 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:28.523 10:38:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.523 10:38:54 -- common/autotest_common.sh@10 -- # set +x 00:13:28.523 10:38:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.523 10:38:54 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:28.523 10:38:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:28.523 10:38:54 -- common/autotest_common.sh@10 -- # set +x 00:13:28.523 [ 00:13:28.523 { 00:13:28.523 "name": "Malloc_QD", 00:13:28.523 "aliases": [ 00:13:28.523 "a4ec644c-6712-4228-94f3-65a0370ae549" 00:13:28.523 ], 00:13:28.523 "product_name": "Malloc disk", 00:13:28.523 "block_size": 512, 00:13:28.523 "num_blocks": 262144, 
00:13:28.523 "uuid": "a4ec644c-6712-4228-94f3-65a0370ae549", 00:13:28.523 "assigned_rate_limits": { 00:13:28.523 "rw_ios_per_sec": 0, 00:13:28.523 "rw_mbytes_per_sec": 0, 00:13:28.523 "r_mbytes_per_sec": 0, 00:13:28.523 "w_mbytes_per_sec": 0 00:13:28.523 }, 00:13:28.523 "claimed": false, 00:13:28.523 "zoned": false, 00:13:28.523 "supported_io_types": { 00:13:28.523 "read": true, 00:13:28.523 "write": true, 00:13:28.523 "unmap": true, 00:13:28.523 "write_zeroes": true, 00:13:28.523 "flush": true, 00:13:28.523 "reset": true, 00:13:28.523 "compare": false, 00:13:28.523 "compare_and_write": false, 00:13:28.523 "abort": true, 00:13:28.523 "nvme_admin": false, 00:13:28.523 "nvme_io": false 00:13:28.523 }, 00:13:28.523 "memory_domains": [ 00:13:28.523 { 00:13:28.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.523 "dma_device_type": 2 00:13:28.523 } 00:13:28.523 ], 00:13:28.523 "driver_specific": {} 00:13:28.523 } 00:13:28.523 ] 00:13:28.523 10:38:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:28.523 10:38:54 -- common/autotest_common.sh@895 -- # return 0 00:13:28.523 10:38:54 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:28.523 10:38:54 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:28.523 Running I/O for 5 seconds... 00:13:30.426 10:38:56 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:30.426 10:38:56 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:30.426 10:38:56 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:30.426 10:38:56 -- bdev/blockdev.sh@519 -- # local iostats 00:13:30.426 10:38:56 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:30.426 10:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.426 10:38:56 -- common/autotest_common.sh@10 -- # set +x 00:13:30.426 10:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.426 10:38:56 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:30.426 10:38:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.426 10:38:56 -- common/autotest_common.sh@10 -- # set +x 00:13:30.426 10:38:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.426 10:38:56 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:30.426 "tick_rate": 2200000000, 00:13:30.426 "ticks": 1620645924268, 00:13:30.426 "bdevs": [ 00:13:30.426 { 00:13:30.426 "name": "Malloc_QD", 00:13:30.426 "bytes_read": 484479488, 00:13:30.426 "num_read_ops": 118275, 00:13:30.426 "bytes_written": 0, 00:13:30.426 "num_write_ops": 0, 00:13:30.426 "bytes_unmapped": 0, 00:13:30.426 "num_unmap_ops": 0, 00:13:30.426 "bytes_copied": 0, 00:13:30.426 "num_copy_ops": 0, 00:13:30.426 "read_latency_ticks": 2152034507216, 00:13:30.426 "max_read_latency_ticks": 25745007, 00:13:30.426 "min_read_latency_ticks": 453112, 00:13:30.426 "write_latency_ticks": 0, 00:13:30.426 "max_write_latency_ticks": 0, 00:13:30.426 "min_write_latency_ticks": 0, 00:13:30.427 "unmap_latency_ticks": 0, 00:13:30.427 "max_unmap_latency_ticks": 0, 00:13:30.427 "min_unmap_latency_ticks": 0, 00:13:30.427 "copy_latency_ticks": 0, 00:13:30.427 "max_copy_latency_ticks": 0, 00:13:30.427 "min_copy_latency_ticks": 0, 00:13:30.427 "io_error": {}, 00:13:30.427 "queue_depth_polling_period": 10, 00:13:30.427 "queue_depth": 512, 00:13:30.427 "io_time": 20, 00:13:30.427 "weighted_io_time": 10240 00:13:30.427 } 00:13:30.427 ] 00:13:30.427 }' 00:13:30.427 10:38:56 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
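The qd_sampling check above enables queue-depth tracking on Malloc_QD with bdev_set_qd_sampling_period and then reads the period back out of bdev_get_iostat. A minimal sketch of the same two calls issued by hand, assuming the standard scripts/rpc.py wrapper in place of the test's rpc_cmd helper; the Malloc_QD name, the period value 10, and the jq filter are the ones used in this run:

  # Turn on queue-depth sampling for Malloc_QD with the period value 10 used by this test
  ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
  # Read the configured period back from the per-bdev iostat dump
  ./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'

The test simply asserts that the value read back matches the value it set, which is the '[' 10 -ne 10 ']' comparison in the trace below.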
00:13:30.427 10:38:57 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:30.427 10:38:57 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:30.427 10:38:57 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:30.427 10:38:57 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:30.427 10:38:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.427 10:38:57 -- common/autotest_common.sh@10 -- # set +x 00:13:30.427 00:13:30.427 Latency(us) 00:13:30.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.427 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:30.427 Malloc_QD : 2.01 30477.06 119.05 0.00 0.00 8370.79 2100.13 10902.81 00:13:30.427 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:30.427 Malloc_QD : 2.01 31362.33 122.51 0.00 0.00 8136.75 1392.64 11736.90 00:13:30.427 =================================================================================================================== 00:13:30.427 Total : 61839.39 241.56 0.00 0.00 8252.08 1392.64 11736.90 00:13:30.427 0 00:13:30.427 10:38:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.427 10:38:57 -- bdev/blockdev.sh@552 -- # killprocess 122174 00:13:30.427 10:38:57 -- common/autotest_common.sh@926 -- # '[' -z 122174 ']' 00:13:30.427 10:38:57 -- common/autotest_common.sh@930 -- # kill -0 122174 00:13:30.427 10:38:57 -- common/autotest_common.sh@931 -- # uname 00:13:30.427 10:38:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:30.427 10:38:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122174 00:13:30.685 10:38:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:30.685 10:38:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:30.685 killing process with pid 122174 00:13:30.685 10:38:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122174' 00:13:30.685 Received shutdown signal, test time was about 2.068327 seconds 00:13:30.685 00:13:30.685 Latency(us) 00:13:30.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.685 =================================================================================================================== 00:13:30.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:30.685 10:38:57 -- common/autotest_common.sh@945 -- # kill 122174 00:13:30.685 10:38:57 -- common/autotest_common.sh@950 -- # wait 122174 00:13:30.943 10:38:57 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:30.943 00:13:30.943 real 0m3.627s 00:13:30.943 user 0m7.060s 00:13:30.943 sys 0m0.380s 00:13:30.943 10:38:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.943 ************************************ 00:13:30.943 10:38:57 -- common/autotest_common.sh@10 -- # set +x 00:13:30.943 END TEST bdev_qd_sampling 00:13:30.943 ************************************ 00:13:30.943 10:38:57 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:30.943 10:38:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:30.943 10:38:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:30.943 10:38:57 -- common/autotest_common.sh@10 -- # set +x 00:13:30.943 ************************************ 00:13:30.943 START TEST bdev_error 00:13:30.943 ************************************ 00:13:30.943 10:38:57 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:13:30.943 10:38:57 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:30.943 
10:38:57 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:30.943 10:38:57 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:30.943 10:38:57 -- bdev/blockdev.sh@470 -- # ERR_PID=122254 00:13:30.943 Process error testing pid: 122254 00:13:30.943 10:38:57 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 122254' 00:13:30.943 10:38:57 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:30.943 10:38:57 -- bdev/blockdev.sh@472 -- # waitforlisten 122254 00:13:30.943 10:38:57 -- common/autotest_common.sh@819 -- # '[' -z 122254 ']' 00:13:30.943 10:38:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.943 10:38:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:30.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.943 10:38:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.943 10:38:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:30.943 10:38:57 -- common/autotest_common.sh@10 -- # set +x 00:13:30.943 [2024-07-24 10:38:57.521567] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:30.943 [2024-07-24 10:38:57.521849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122254 ] 00:13:31.214 [2024-07-24 10:38:57.670661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.214 [2024-07-24 10:38:57.776010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.149 10:38:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.149 10:38:58 -- common/autotest_common.sh@852 -- # return 0 00:13:32.149 10:38:58 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:32.149 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.149 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.149 Dev_1 00:13:32.149 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.149 10:38:58 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:32.149 10:38:58 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:32.149 10:38:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:32.149 10:38:58 -- common/autotest_common.sh@889 -- # local i 00:13:32.149 10:38:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:32.149 10:38:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:32.149 10:38:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:32.149 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.149 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.149 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.149 10:38:58 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:32.149 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.149 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.149 [ 00:13:32.149 { 00:13:32.149 "name": "Dev_1", 00:13:32.149 "aliases": [ 00:13:32.149 "eb9abbaf-b5a3-4525-b3bb-7e6a7393d352" 00:13:32.149 ], 00:13:32.149 "product_name": "Malloc disk", 00:13:32.149 "block_size": 512, 00:13:32.149 "num_blocks": 262144, 
00:13:32.149 "uuid": "eb9abbaf-b5a3-4525-b3bb-7e6a7393d352", 00:13:32.149 "assigned_rate_limits": { 00:13:32.149 "rw_ios_per_sec": 0, 00:13:32.149 "rw_mbytes_per_sec": 0, 00:13:32.149 "r_mbytes_per_sec": 0, 00:13:32.149 "w_mbytes_per_sec": 0 00:13:32.149 }, 00:13:32.149 "claimed": false, 00:13:32.149 "zoned": false, 00:13:32.149 "supported_io_types": { 00:13:32.149 "read": true, 00:13:32.149 "write": true, 00:13:32.149 "unmap": true, 00:13:32.149 "write_zeroes": true, 00:13:32.149 "flush": true, 00:13:32.149 "reset": true, 00:13:32.149 "compare": false, 00:13:32.149 "compare_and_write": false, 00:13:32.149 "abort": true, 00:13:32.149 "nvme_admin": false, 00:13:32.149 "nvme_io": false 00:13:32.149 }, 00:13:32.149 "memory_domains": [ 00:13:32.149 { 00:13:32.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.149 "dma_device_type": 2 00:13:32.149 } 00:13:32.149 ], 00:13:32.149 "driver_specific": {} 00:13:32.149 } 00:13:32.149 ] 00:13:32.149 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.149 10:38:58 -- common/autotest_common.sh@895 -- # return 0 00:13:32.149 10:38:58 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:32.149 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.149 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.149 true 00:13:32.149 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.149 10:38:58 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:32.150 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.150 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.150 Dev_2 00:13:32.150 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.150 10:38:58 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:32.150 10:38:58 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:32.150 10:38:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:32.150 10:38:58 -- common/autotest_common.sh@889 -- # local i 00:13:32.150 10:38:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:32.150 10:38:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:32.150 10:38:58 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:32.150 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.150 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.150 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.150 10:38:58 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:32.150 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.150 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.150 [ 00:13:32.150 { 00:13:32.150 "name": "Dev_2", 00:13:32.150 "aliases": [ 00:13:32.150 "b14bb888-1e9a-4efb-aac7-c2620cdb822f" 00:13:32.150 ], 00:13:32.150 "product_name": "Malloc disk", 00:13:32.150 "block_size": 512, 00:13:32.150 "num_blocks": 262144, 00:13:32.150 "uuid": "b14bb888-1e9a-4efb-aac7-c2620cdb822f", 00:13:32.150 "assigned_rate_limits": { 00:13:32.150 "rw_ios_per_sec": 0, 00:13:32.150 "rw_mbytes_per_sec": 0, 00:13:32.150 "r_mbytes_per_sec": 0, 00:13:32.150 "w_mbytes_per_sec": 0 00:13:32.150 }, 00:13:32.150 "claimed": false, 00:13:32.150 "zoned": false, 00:13:32.150 "supported_io_types": { 00:13:32.150 "read": true, 00:13:32.150 "write": true, 00:13:32.150 "unmap": true, 00:13:32.150 "write_zeroes": true, 00:13:32.150 "flush": true, 00:13:32.150 "reset": true, 00:13:32.150 "compare": false, 
00:13:32.150 "compare_and_write": false, 00:13:32.150 "abort": true, 00:13:32.150 "nvme_admin": false, 00:13:32.150 "nvme_io": false 00:13:32.150 }, 00:13:32.150 "memory_domains": [ 00:13:32.150 { 00:13:32.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.150 "dma_device_type": 2 00:13:32.150 } 00:13:32.150 ], 00:13:32.150 "driver_specific": {} 00:13:32.150 } 00:13:32.150 ] 00:13:32.150 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.150 10:38:58 -- common/autotest_common.sh@895 -- # return 0 00:13:32.150 10:38:58 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:32.150 10:38:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.150 10:38:58 -- common/autotest_common.sh@10 -- # set +x 00:13:32.150 10:38:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.150 10:38:58 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:32.150 10:38:58 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:32.150 Running I/O for 5 seconds... 00:13:33.086 10:38:59 -- bdev/blockdev.sh@485 -- # kill -0 122254 00:13:33.086 Process is existed as continue on error is set. Pid: 122254 00:13:33.086 10:38:59 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 122254' 00:13:33.086 10:38:59 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:33.086 10:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.086 10:38:59 -- common/autotest_common.sh@10 -- # set +x 00:13:33.086 10:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.086 10:38:59 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:33.086 10:38:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.086 10:38:59 -- common/autotest_common.sh@10 -- # set +x 00:13:33.086 10:38:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.086 10:38:59 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:33.086 Timeout while waiting for response: 00:13:33.086 00:13:33.086 00:13:37.276 00:13:37.276 Latency(us) 00:13:37.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.276 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:37.276 EE_Dev_1 : 0.90 39049.02 152.54 5.56 0.00 406.75 177.80 819.20 00:13:37.276 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:37.276 Dev_2 : 5.00 84537.16 330.22 0.00 0.00 186.37 87.97 34317.03 00:13:37.276 =================================================================================================================== 00:13:37.276 Total : 123586.18 482.76 5.56 0.00 203.28 87.97 34317.03 00:13:38.208 10:39:04 -- bdev/blockdev.sh@497 -- # killprocess 122254 00:13:38.208 10:39:04 -- common/autotest_common.sh@926 -- # '[' -z 122254 ']' 00:13:38.208 10:39:04 -- common/autotest_common.sh@930 -- # kill -0 122254 00:13:38.208 10:39:04 -- common/autotest_common.sh@931 -- # uname 00:13:38.208 10:39:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:38.208 10:39:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122254 00:13:38.208 10:39:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:38.208 10:39:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:38.208 killing process with pid 122254 00:13:38.208 10:39:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122254' 00:13:38.208 Received shutdown signal, test time was about 
5.000000 seconds 00:13:38.208 00:13:38.208 Latency(us) 00:13:38.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.208 =================================================================================================================== 00:13:38.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.208 10:39:04 -- common/autotest_common.sh@945 -- # kill 122254 00:13:38.208 10:39:04 -- common/autotest_common.sh@950 -- # wait 122254 00:13:38.466 10:39:05 -- bdev/blockdev.sh@501 -- # ERR_PID=122363 00:13:38.466 Process error testing pid: 122363 00:13:38.466 10:39:05 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 122363' 00:13:38.466 10:39:05 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:38.466 10:39:05 -- bdev/blockdev.sh@503 -- # waitforlisten 122363 00:13:38.466 10:39:05 -- common/autotest_common.sh@819 -- # '[' -z 122363 ']' 00:13:38.466 10:39:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.466 10:39:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:38.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.466 10:39:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.466 10:39:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:38.466 10:39:05 -- common/autotest_common.sh@10 -- # set +x 00:13:38.724 [2024-07-24 10:39:05.184312] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:38.724 [2024-07-24 10:39:05.184563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122363 ] 00:13:38.724 [2024-07-24 10:39:05.332048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.982 [2024-07-24 10:39:05.429815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.546 10:39:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.546 10:39:06 -- common/autotest_common.sh@852 -- # return 0 00:13:39.546 10:39:06 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:39.546 10:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.546 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.546 Dev_1 00:13:39.546 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.546 10:39:06 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:39.546 10:39:06 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:13:39.546 10:39:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:39.546 10:39:06 -- common/autotest_common.sh@889 -- # local i 00:13:39.546 10:39:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:39.546 10:39:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:39.546 10:39:06 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:39.547 10:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.547 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.547 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.547 10:39:06 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:39.547 10:39:06 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:39.547 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.547 [ 00:13:39.547 { 00:13:39.547 "name": "Dev_1", 00:13:39.547 "aliases": [ 00:13:39.547 "5516f07e-c778-4748-aaa6-7646b549b587" 00:13:39.547 ], 00:13:39.547 "product_name": "Malloc disk", 00:13:39.547 "block_size": 512, 00:13:39.547 "num_blocks": 262144, 00:13:39.547 "uuid": "5516f07e-c778-4748-aaa6-7646b549b587", 00:13:39.547 "assigned_rate_limits": { 00:13:39.547 "rw_ios_per_sec": 0, 00:13:39.547 "rw_mbytes_per_sec": 0, 00:13:39.547 "r_mbytes_per_sec": 0, 00:13:39.547 "w_mbytes_per_sec": 0 00:13:39.547 }, 00:13:39.547 "claimed": false, 00:13:39.547 "zoned": false, 00:13:39.547 "supported_io_types": { 00:13:39.547 "read": true, 00:13:39.547 "write": true, 00:13:39.547 "unmap": true, 00:13:39.547 "write_zeroes": true, 00:13:39.547 "flush": true, 00:13:39.547 "reset": true, 00:13:39.547 "compare": false, 00:13:39.547 "compare_and_write": false, 00:13:39.547 "abort": true, 00:13:39.547 "nvme_admin": false, 00:13:39.547 "nvme_io": false 00:13:39.547 }, 00:13:39.547 "memory_domains": [ 00:13:39.547 { 00:13:39.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.547 "dma_device_type": 2 00:13:39.547 } 00:13:39.547 ], 00:13:39.547 "driver_specific": {} 00:13:39.547 } 00:13:39.547 ] 00:13:39.804 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.804 10:39:06 -- common/autotest_common.sh@895 -- # return 0 00:13:39.804 10:39:06 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:39.804 10:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.804 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.804 true 00:13:39.804 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.804 10:39:06 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:39.804 10:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.804 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.804 Dev_2 00:13:39.804 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.804 10:39:06 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:39.804 10:39:06 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:13:39.804 10:39:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:39.804 10:39:06 -- common/autotest_common.sh@889 -- # local i 00:13:39.804 10:39:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:39.804 10:39:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:39.804 10:39:06 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:39.805 10:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.805 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.805 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.805 10:39:06 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:39.805 10:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.805 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.805 [ 00:13:39.805 { 00:13:39.805 "name": "Dev_2", 00:13:39.805 "aliases": [ 00:13:39.805 "e407d4c2-13a0-4c79-91e4-3549661ee0ea" 00:13:39.805 ], 00:13:39.805 "product_name": "Malloc disk", 00:13:39.805 "block_size": 512, 00:13:39.805 "num_blocks": 262144, 00:13:39.805 "uuid": "e407d4c2-13a0-4c79-91e4-3549661ee0ea", 00:13:39.805 "assigned_rate_limits": { 00:13:39.805 "rw_ios_per_sec": 0, 00:13:39.805 "rw_mbytes_per_sec": 0, 00:13:39.805 "r_mbytes_per_sec": 0, 00:13:39.805 
"w_mbytes_per_sec": 0 00:13:39.805 }, 00:13:39.805 "claimed": false, 00:13:39.805 "zoned": false, 00:13:39.805 "supported_io_types": { 00:13:39.805 "read": true, 00:13:39.805 "write": true, 00:13:39.805 "unmap": true, 00:13:39.805 "write_zeroes": true, 00:13:39.805 "flush": true, 00:13:39.805 "reset": true, 00:13:39.805 "compare": false, 00:13:39.805 "compare_and_write": false, 00:13:39.805 "abort": true, 00:13:39.805 "nvme_admin": false, 00:13:39.805 "nvme_io": false 00:13:39.805 }, 00:13:39.805 "memory_domains": [ 00:13:39.805 { 00:13:39.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.805 "dma_device_type": 2 00:13:39.805 } 00:13:39.805 ], 00:13:39.805 "driver_specific": {} 00:13:39.805 } 00:13:39.805 ] 00:13:39.805 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.805 10:39:06 -- common/autotest_common.sh@895 -- # return 0 00:13:39.805 10:39:06 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:39.805 10:39:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.805 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:39.805 10:39:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.805 10:39:06 -- bdev/blockdev.sh@513 -- # NOT wait 122363 00:13:39.805 10:39:06 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:39.805 10:39:06 -- common/autotest_common.sh@640 -- # local es=0 00:13:39.805 10:39:06 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 122363 00:13:39.805 10:39:06 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:39.805 10:39:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:39.805 10:39:06 -- common/autotest_common.sh@632 -- # type -t wait 00:13:39.805 10:39:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:39.805 10:39:06 -- common/autotest_common.sh@643 -- # wait 122363 00:13:39.805 Running I/O for 5 seconds... 
00:13:39.805 task offset: 79592 on job bdev=EE_Dev_1 fails 00:13:39.805 00:13:39.805 Latency(us) 00:13:39.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.805 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:39.805 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:39.805 EE_Dev_1 : 0.00 22540.98 88.05 5122.95 0.00 460.46 209.45 852.71 00:13:39.805 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:39.805 Dev_2 : 0.00 16797.90 65.62 0.00 0.00 622.95 161.05 1131.99 00:13:39.805 =================================================================================================================== 00:13:39.805 Total : 39338.88 153.67 5122.95 0.00 548.59 161.05 1131.99 00:13:39.805 [2024-07-24 10:39:06.419835] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:39.805 request: 00:13:39.805 { 00:13:39.805 "method": "perform_tests", 00:13:39.805 "req_id": 1 00:13:39.805 } 00:13:39.805 Got JSON-RPC error response 00:13:39.805 response: 00:13:39.805 { 00:13:39.805 "code": -32603, 00:13:39.805 "message": "bdevperf failed with error Operation not permitted" 00:13:39.805 } 00:13:40.371 10:39:06 -- common/autotest_common.sh@643 -- # es=255 00:13:40.371 10:39:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:40.371 10:39:06 -- common/autotest_common.sh@652 -- # es=127 00:13:40.371 10:39:06 -- common/autotest_common.sh@653 -- # case "$es" in 00:13:40.371 10:39:06 -- common/autotest_common.sh@660 -- # es=1 00:13:40.371 10:39:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:40.371 00:13:40.371 real 0m9.453s 00:13:40.371 user 0m9.630s 00:13:40.371 sys 0m0.868s 00:13:40.371 10:39:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.371 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:40.371 ************************************ 00:13:40.371 END TEST bdev_error 00:13:40.371 ************************************ 00:13:40.371 10:39:06 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:40.371 10:39:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:40.371 10:39:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:40.371 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:40.371 ************************************ 00:13:40.371 START TEST bdev_stat 00:13:40.371 ************************************ 00:13:40.371 10:39:06 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:13:40.371 10:39:06 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:40.371 10:39:06 -- bdev/blockdev.sh@594 -- # STAT_PID=122410 00:13:40.371 10:39:06 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:40.371 10:39:06 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 122410' 00:13:40.371 Process Bdev IO statistics testing pid: 122410 00:13:40.371 10:39:06 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:40.371 10:39:06 -- bdev/blockdev.sh@597 -- # waitforlisten 122410 00:13:40.371 10:39:06 -- common/autotest_common.sh@819 -- # '[' -z 122410 ']' 00:13:40.371 10:39:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.371 10:39:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:40.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
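The bdev_error suite that finished above works by stacking an error-injection bdev on a plain malloc bdev: bdev_error_create Dev_1 exposes the stacked device as EE_Dev_1, and bdev_error_inject_error arms it to fail I/O, which is what drives the nonzero Fail/s column in the EE_Dev_1 job output. A minimal sketch of that injection sequence, assuming the standard scripts/rpc.py wrapper rather than the test's rpc_cmd helper (the device names, malloc sizing, and -n 5 count are taken from the run above):

  # Create the 128 MiB / 512-byte-block malloc bdev and stack an error bdev on it (exposed as EE_Dev_1)
  ./scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
  ./scripts/rpc.py bdev_error_create Dev_1
  # Fail the next 5 I/Os of every type submitted to EE_Dev_1
  ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5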
00:13:40.371 10:39:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.371 10:39:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:40.371 10:39:06 -- common/autotest_common.sh@10 -- # set +x 00:13:40.371 [2024-07-24 10:39:07.030063] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:40.371 [2024-07-24 10:39:07.030904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122410 ] 00:13:40.629 [2024-07-24 10:39:07.189395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:40.629 [2024-07-24 10:39:07.278043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.629 [2024-07-24 10:39:07.278066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.570 10:39:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:41.570 10:39:07 -- common/autotest_common.sh@852 -- # return 0 00:13:41.570 10:39:07 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:41.570 10:39:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.570 10:39:07 -- common/autotest_common.sh@10 -- # set +x 00:13:41.570 Malloc_STAT 00:13:41.570 10:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.570 10:39:08 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:41.570 10:39:08 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:13:41.570 10:39:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:41.570 10:39:08 -- common/autotest_common.sh@889 -- # local i 00:13:41.570 10:39:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:41.570 10:39:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:41.570 10:39:08 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:41.570 10:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.570 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:41.570 10:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.570 10:39:08 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:41.570 10:39:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.570 10:39:08 -- common/autotest_common.sh@10 -- # set +x 00:13:41.570 [ 00:13:41.570 { 00:13:41.570 "name": "Malloc_STAT", 00:13:41.570 "aliases": [ 00:13:41.570 "b1d1ca28-e228-4320-9eb1-4fc5dfea2fe0" 00:13:41.570 ], 00:13:41.570 "product_name": "Malloc disk", 00:13:41.570 "block_size": 512, 00:13:41.570 "num_blocks": 262144, 00:13:41.570 "uuid": "b1d1ca28-e228-4320-9eb1-4fc5dfea2fe0", 00:13:41.570 "assigned_rate_limits": { 00:13:41.570 "rw_ios_per_sec": 0, 00:13:41.570 "rw_mbytes_per_sec": 0, 00:13:41.570 "r_mbytes_per_sec": 0, 00:13:41.570 "w_mbytes_per_sec": 0 00:13:41.570 }, 00:13:41.570 "claimed": false, 00:13:41.570 "zoned": false, 00:13:41.570 "supported_io_types": { 00:13:41.570 "read": true, 00:13:41.570 "write": true, 00:13:41.570 "unmap": true, 00:13:41.570 "write_zeroes": true, 00:13:41.570 "flush": true, 00:13:41.570 "reset": true, 00:13:41.570 "compare": false, 00:13:41.570 "compare_and_write": false, 00:13:41.570 "abort": true, 00:13:41.570 "nvme_admin": false, 00:13:41.570 "nvme_io": false 00:13:41.570 }, 00:13:41.570 "memory_domains": [ 00:13:41.570 { 
00:13:41.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.570 "dma_device_type": 2 00:13:41.570 } 00:13:41.570 ], 00:13:41.570 "driver_specific": {} 00:13:41.570 } 00:13:41.570 ] 00:13:41.570 10:39:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.570 10:39:08 -- common/autotest_common.sh@895 -- # return 0 00:13:41.570 10:39:08 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:41.570 10:39:08 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.570 Running I/O for 10 seconds... 00:13:43.473 10:39:10 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:43.473 10:39:10 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:43.473 10:39:10 -- bdev/blockdev.sh@558 -- # local iostats 00:13:43.473 10:39:10 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:43.473 10:39:10 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:43.473 10:39:10 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:43.473 10:39:10 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:43.473 10:39:10 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:43.473 10:39:10 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:43.473 10:39:10 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:43.473 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.473 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:43.473 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.473 10:39:10 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:43.473 "tick_rate": 2200000000, 00:13:43.473 "ticks": 1649543929966, 00:13:43.473 "bdevs": [ 00:13:43.473 { 00:13:43.473 "name": "Malloc_STAT", 00:13:43.473 "bytes_read": 475042304, 00:13:43.473 "num_read_ops": 115971, 00:13:43.473 "bytes_written": 0, 00:13:43.473 "num_write_ops": 0, 00:13:43.473 "bytes_unmapped": 0, 00:13:43.473 "num_unmap_ops": 0, 00:13:43.473 "bytes_copied": 0, 00:13:43.473 "num_copy_ops": 0, 00:13:43.473 "read_latency_ticks": 2159862249279, 00:13:43.473 "max_read_latency_ticks": 24304616, 00:13:43.473 "min_read_latency_ticks": 470667, 00:13:43.473 "write_latency_ticks": 0, 00:13:43.473 "max_write_latency_ticks": 0, 00:13:43.473 "min_write_latency_ticks": 0, 00:13:43.473 "unmap_latency_ticks": 0, 00:13:43.473 "max_unmap_latency_ticks": 0, 00:13:43.473 "min_unmap_latency_ticks": 0, 00:13:43.473 "copy_latency_ticks": 0, 00:13:43.473 "max_copy_latency_ticks": 0, 00:13:43.473 "min_copy_latency_ticks": 0, 00:13:43.473 "io_error": {} 00:13:43.473 } 00:13:43.473 ] 00:13:43.473 }' 00:13:43.473 10:39:10 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@567 -- # io_count1=115971 00:13:43.732 10:39:10 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:43.732 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.732 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:43.732 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.732 10:39:10 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:13:43.732 "tick_rate": 2200000000, 00:13:43.732 "ticks": 1649709635609, 00:13:43.732 "name": "Malloc_STAT", 00:13:43.732 "channels": [ 00:13:43.732 { 00:13:43.732 "thread_id": 2, 00:13:43.732 "bytes_read": 240123904, 00:13:43.732 "num_read_ops": 58624, 00:13:43.732 "bytes_written": 0, 00:13:43.732 "num_write_ops": 0, 00:13:43.732 "bytes_unmapped": 0, 00:13:43.732 "num_unmap_ops": 0, 00:13:43.732 
"bytes_copied": 0, 00:13:43.732 "num_copy_ops": 0, 00:13:43.732 "read_latency_ticks": 1121852411676, 00:13:43.732 "max_read_latency_ticks": 24304616, 00:13:43.732 "min_read_latency_ticks": 11181830, 00:13:43.732 "write_latency_ticks": 0, 00:13:43.732 "max_write_latency_ticks": 0, 00:13:43.732 "min_write_latency_ticks": 0, 00:13:43.732 "unmap_latency_ticks": 0, 00:13:43.732 "max_unmap_latency_ticks": 0, 00:13:43.732 "min_unmap_latency_ticks": 0, 00:13:43.732 "copy_latency_ticks": 0, 00:13:43.732 "max_copy_latency_ticks": 0, 00:13:43.732 "min_copy_latency_ticks": 0 00:13:43.732 }, 00:13:43.732 { 00:13:43.732 "thread_id": 3, 00:13:43.732 "bytes_read": 254803968, 00:13:43.732 "num_read_ops": 62208, 00:13:43.732 "bytes_written": 0, 00:13:43.732 "num_write_ops": 0, 00:13:43.732 "bytes_unmapped": 0, 00:13:43.732 "num_unmap_ops": 0, 00:13:43.732 "bytes_copied": 0, 00:13:43.732 "num_copy_ops": 0, 00:13:43.732 "read_latency_ticks": 1123912024070, 00:13:43.732 "max_read_latency_ticks": 22926536, 00:13:43.732 "min_read_latency_ticks": 9530254, 00:13:43.732 "write_latency_ticks": 0, 00:13:43.732 "max_write_latency_ticks": 0, 00:13:43.732 "min_write_latency_ticks": 0, 00:13:43.732 "unmap_latency_ticks": 0, 00:13:43.732 "max_unmap_latency_ticks": 0, 00:13:43.732 "min_unmap_latency_ticks": 0, 00:13:43.732 "copy_latency_ticks": 0, 00:13:43.732 "max_copy_latency_ticks": 0, 00:13:43.732 "min_copy_latency_ticks": 0 00:13:43.732 } 00:13:43.732 ] 00:13:43.732 }' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=58624 00:13:43.732 10:39:10 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=58624 00:13:43.732 10:39:10 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=62208 00:13:43.732 10:39:10 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=120832 00:13:43.732 10:39:10 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:43.732 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.732 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:43.732 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.732 10:39:10 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:43.732 "tick_rate": 2200000000, 00:13:43.732 "ticks": 1649987398428, 00:13:43.732 "bdevs": [ 00:13:43.732 { 00:13:43.732 "name": "Malloc_STAT", 00:13:43.732 "bytes_read": 527471104, 00:13:43.732 "num_read_ops": 128771, 00:13:43.732 "bytes_written": 0, 00:13:43.732 "num_write_ops": 0, 00:13:43.732 "bytes_unmapped": 0, 00:13:43.732 "num_unmap_ops": 0, 00:13:43.732 "bytes_copied": 0, 00:13:43.732 "num_copy_ops": 0, 00:13:43.732 "read_latency_ticks": 2388053048554, 00:13:43.732 "max_read_latency_ticks": 24304616, 00:13:43.732 "min_read_latency_ticks": 470667, 00:13:43.732 "write_latency_ticks": 0, 00:13:43.732 "max_write_latency_ticks": 0, 00:13:43.732 "min_write_latency_ticks": 0, 00:13:43.732 "unmap_latency_ticks": 0, 00:13:43.732 "max_unmap_latency_ticks": 0, 00:13:43.732 "min_unmap_latency_ticks": 0, 00:13:43.732 "copy_latency_ticks": 0, 00:13:43.732 "max_copy_latency_ticks": 0, 00:13:43.732 "min_copy_latency_ticks": 0, 00:13:43.732 "io_error": {} 00:13:43.732 } 00:13:43.732 ] 00:13:43.732 }' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@576 -- # io_count2=128771 00:13:43.732 10:39:10 -- bdev/blockdev.sh@581 -- # '[' 120832 -lt 
115971 ']' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@581 -- # '[' 120832 -gt 128771 ']' 00:13:43.732 10:39:10 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:43.732 10:39:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.732 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:43.732 00:13:43.732 Latency(us) 00:13:43.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.732 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:43.732 Malloc_STAT : 2.20 29504.32 115.25 0.00 0.00 8650.53 1630.95 11081.54 00:13:43.732 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:43.732 Malloc_STAT : 2.21 31331.01 122.39 0.00 0.00 8146.93 934.63 10426.18 00:13:43.732 =================================================================================================================== 00:13:43.732 Total : 60835.33 237.64 0.00 0.00 8391.04 934.63 11081.54 00:13:43.732 0 00:13:43.732 10:39:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.732 10:39:10 -- bdev/blockdev.sh@607 -- # killprocess 122410 00:13:43.732 10:39:10 -- common/autotest_common.sh@926 -- # '[' -z 122410 ']' 00:13:43.732 10:39:10 -- common/autotest_common.sh@930 -- # kill -0 122410 00:13:43.732 10:39:10 -- common/autotest_common.sh@931 -- # uname 00:13:43.990 10:39:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:43.990 10:39:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122410 00:13:43.990 10:39:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:43.990 killing process with pid 122410 00:13:43.990 10:39:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:43.990 10:39:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122410' 00:13:43.990 Received shutdown signal, test time was about 2.264041 seconds 00:13:43.990 00:13:43.990 Latency(us) 00:13:43.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.990 =================================================================================================================== 00:13:43.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.990 10:39:10 -- common/autotest_common.sh@945 -- # kill 122410 00:13:43.990 10:39:10 -- common/autotest_common.sh@950 -- # wait 122410 00:13:44.248 ************************************ 00:13:44.248 END TEST bdev_stat 00:13:44.248 ************************************ 00:13:44.248 10:39:10 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:44.248 00:13:44.248 real 0m3.763s 00:13:44.248 user 0m7.445s 00:13:44.248 sys 0m0.388s 00:13:44.248 10:39:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.248 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:44.248 10:39:10 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:13:44.248 10:39:10 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:44.248 10:39:10 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:44.248 10:39:10 -- bdev/blockdev.sh@809 -- # cleanup 00:13:44.248 10:39:10 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:44.248 10:39:10 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:44.248 10:39:10 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:44.248 10:39:10 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:44.248 10:39:10 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:44.248 10:39:10 -- 
bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:44.248 00:13:44.248 real 1m57.199s 00:13:44.248 user 5m14.984s 00:13:44.248 sys 0m21.206s 00:13:44.248 10:39:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:44.248 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:44.248 ************************************ 00:13:44.248 END TEST blockdev_general 00:13:44.248 ************************************ 00:13:44.249 10:39:10 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:44.249 10:39:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:44.249 10:39:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.249 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:44.249 ************************************ 00:13:44.249 START TEST bdev_raid 00:13:44.249 ************************************ 00:13:44.249 10:39:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:44.249 * Looking for test storage... 00:13:44.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:44.249 10:39:10 -- bdev/nbd_common.sh@6 -- # set -e 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:44.249 10:39:10 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:44.507 10:39:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:44.507 10:39:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:44.507 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:44.507 ************************************ 00:13:44.507 START TEST raid_function_test_raid0 00:13:44.507 ************************************ 00:13:44.507 10:39:10 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@86 -- # raid_pid=122558 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122558' 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:44.507 Process raid pid: 122558 00:13:44.507 10:39:10 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122558 /var/tmp/spdk-raid.sock 00:13:44.507 10:39:10 -- common/autotest_common.sh@819 -- # '[' -z 122558 ']' 00:13:44.507 10:39:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.507 10:39:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:44.507 10:39:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
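The bdev_raid.sh prologue traced here pins the RPC client to a dedicated socket, probes for nbd support, and starts bdev_svc as the RPC target. Condensed into a sketch (paths as they appear in this run; waitforlisten is the autotest_common.sh helper shown in the trace):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    if [ "$(uname -s)" = Linux ] && modprobe -n nbd; then   # dry-run probe: is the nbd module available?
        has_nbd=true
        modprobe nbd                                        # load it for the later data-verify passes
    fi
    # bdev_svc hosts the raid bdevs and answers JSON-RPC on the raid-specific socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock       # block until the socket accepts RPCs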
00:13:44.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:44.507 10:39:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:44.507 10:39:10 -- common/autotest_common.sh@10 -- # set +x 00:13:44.507 [2024-07-24 10:39:11.005422] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:44.507 [2024-07-24 10:39:11.006713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.507 [2024-07-24 10:39:11.157717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.765 [2024-07-24 10:39:11.249927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.765 [2024-07-24 10:39:11.306666] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.333 10:39:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:45.333 10:39:11 -- common/autotest_common.sh@852 -- # return 0 00:13:45.333 10:39:11 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:13:45.333 10:39:11 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:13:45.333 10:39:11 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:45.333 10:39:11 -- bdev/bdev_raid.sh@70 -- # cat 00:13:45.333 10:39:11 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:45.900 [2024-07-24 10:39:12.298121] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:45.900 [2024-07-24 10:39:12.301365] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:45.900 [2024-07-24 10:39:12.301618] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:13:45.900 [2024-07-24 10:39:12.301781] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:45.900 [2024-07-24 10:39:12.302160] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:13:45.900 [2024-07-24 10:39:12.302707] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:13:45.900 [2024-07-24 10:39:12.302832] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:13:45.900 [2024-07-24 10:39:12.303175] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.900 Base_1 00:13:45.900 Base_2 00:13:45.900 10:39:12 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:45.900 10:39:12 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:45.900 10:39:12 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:46.158 10:39:12 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:46.158 10:39:12 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:46.158 10:39:12 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 
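What the trace does next is hook the freshly built raid0 volume up to nbd: it asks the target which raid bdev came online and exports it through /dev/nbd0. A short paraphrase of bdev_raid.sh@91-97 plus the nbd helpers, assuming the rpc_py alias defined above:

    raid_bdev=$($rpc_py bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)')
    [ "$raid_bdev" = '' ] && exit 1                    # resolves to "raid" in this run
    $rpc_py nbd_start_disk "$raid_bdev" /dev/nbd0      # export the 131072-block (64 MiB) raid0 bdev
    waitfornbd nbd0                                    # helper: poll /proc/partitions until nbd0 appears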
00:13:46.158 10:39:12 -- bdev/nbd_common.sh@12 -- # local i 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.158 10:39:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:46.158 [2024-07-24 10:39:12.811439] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:13:46.158 /dev/nbd0 00:13:46.417 10:39:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:46.417 10:39:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:46.417 10:39:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:46.417 10:39:12 -- common/autotest_common.sh@857 -- # local i 00:13:46.417 10:39:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:46.417 10:39:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:46.417 10:39:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:46.417 10:39:12 -- common/autotest_common.sh@861 -- # break 00:13:46.417 10:39:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:46.417 10:39:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:46.417 10:39:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:46.417 1+0 records in 00:13:46.417 1+0 records out 00:13:46.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064211 s, 6.4 MB/s 00:13:46.417 10:39:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.417 10:39:12 -- common/autotest_common.sh@874 -- # size=4096 00:13:46.417 10:39:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:46.417 10:39:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:46.417 10:39:12 -- common/autotest_common.sh@877 -- # return 0 00:13:46.417 10:39:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:46.417 10:39:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:46.417 10:39:12 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:46.417 10:39:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:46.417 10:39:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:46.676 { 00:13:46.676 "nbd_device": "/dev/nbd0", 00:13:46.676 "bdev_name": "raid" 00:13:46.676 } 00:13:46.676 ]' 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:46.676 { 00:13:46.676 "nbd_device": "/dev/nbd0", 00:13:46.676 "bdev_name": "raid" 00:13:46.676 } 00:13:46.676 ]' 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@65 -- # count=1 00:13:46.676 10:39:13 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@18 -- # local 
nbd=/dev/nbd0 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:46.676 4096+0 records in 00:13:46.676 4096+0 records out 00:13:46.676 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0206041 s, 102 MB/s 00:13:46.676 10:39:13 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:46.935 4096+0 records in 00:13:46.935 4096+0 records out 00:13:46.935 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.336187 s, 6.2 MB/s 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:46.935 128+0 records in 00:13:46.935 128+0 records out 00:13:46.935 65536 bytes (66 kB, 64 KiB) copied, 0.0011062 s, 59.2 MB/s 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:46.935 2035+0 records in 00:13:46.935 2035+0 records out 00:13:46.935 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00726717 s, 143 MB/s 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:46.935 
10:39:13 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:46.935 456+0 records in 00:13:46.935 456+0 records out 00:13:46.935 233472 bytes (233 kB, 228 KiB) copied, 0.00221525 s, 105 MB/s 00:13:46.935 10:39:13 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:47.193 10:39:13 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:47.193 10:39:13 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:47.193 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:47.193 10:39:13 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:47.193 10:39:13 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:47.193 10:39:13 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:47.193 10:39:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:47.193 10:39:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:47.193 10:39:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.193 10:39:13 -- bdev/nbd_common.sh@51 -- # local i 00:13:47.193 10:39:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.193 10:39:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:47.451 [2024-07-24 10:39:13.913643] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@41 -- # break 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.451 10:39:13 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:47.451 10:39:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@65 -- # true 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@65 -- # count=0 00:13:47.710 10:39:14 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:47.710 10:39:14 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:47.710 10:39:14 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:47.710 10:39:14 -- bdev/bdev_raid.sh@111 -- # killprocess 122558 00:13:47.710 10:39:14 -- common/autotest_common.sh@926 -- # '[' -z 122558 ']' 00:13:47.710 10:39:14 -- common/autotest_common.sh@930 -- # kill -0 122558 00:13:47.710 10:39:14 -- common/autotest_common.sh@931 -- # uname 00:13:47.710 10:39:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:47.710 10:39:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122558 00:13:47.710 killing process with pid 122558 00:13:47.710 10:39:14 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:47.710 10:39:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:47.710 10:39:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122558' 00:13:47.710 10:39:14 -- common/autotest_common.sh@945 -- # kill 122558 00:13:47.710 10:39:14 -- common/autotest_common.sh@950 -- # wait 122558 00:13:47.710 [2024-07-24 10:39:14.244358] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.710 [2024-07-24 10:39:14.244534] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.710 [2024-07-24 10:39:14.244732] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.710 [2024-07-24 10:39:14.244930] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:13:47.710 [2024-07-24 10:39:14.271445] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.968 ************************************ 00:13:47.968 END TEST raid_function_test_raid0 00:13:47.968 ************************************ 00:13:47.968 10:39:14 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:47.968 00:13:47.968 real 0m3.653s 00:13:47.968 user 0m5.027s 00:13:47.968 sys 0m0.926s 00:13:47.968 10:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:47.968 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:13:47.968 10:39:14 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:13:47.968 10:39:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:47.968 10:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:47.968 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.227 ************************************ 00:13:48.227 START TEST raid_function_test_concat 00:13:48.227 ************************************ 00:13:48.227 10:39:14 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:13:48.227 10:39:14 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:13:48.227 10:39:14 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:48.227 10:39:14 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:48.227 10:39:14 -- bdev/bdev_raid.sh@86 -- # raid_pid=122700 00:13:48.227 10:39:14 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 122700' 00:13:48.227 10:39:14 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:48.227 Process raid pid: 122700 00:13:48.227 10:39:14 -- bdev/bdev_raid.sh@88 -- # waitforlisten 122700 /var/tmp/spdk-raid.sock 00:13:48.227 10:39:14 -- common/autotest_common.sh@819 -- # '[' -z 122700 ']' 00:13:48.227 10:39:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:48.227 10:39:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:48.227 10:39:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:48.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:48.227 10:39:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:48.227 10:39:14 -- common/autotest_common.sh@10 -- # set +x 00:13:48.227 [2024-07-24 10:39:14.711528] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
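Most of the raid0 function test that just finished runs inside raid_unmap_data_verify, which is where the 4096+0 record counts above come from. Collapsed into one sketch, with the 512-byte block size and the offset/length pairs taken straight from the trace:

    dd if=/dev/urandom of=/raidrandtest bs=512 count=4096            # 2 MiB reference file
    dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct  # write it through the raid
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidrandtest /dev/nbd0                        # raid must read back identically
    unmap_blk_offs=(0 1028 321); unmap_blk_nums=(128 2035 456)
    for i in 0 1 2; do
        dd if=/dev/zero of=/raidrandtest bs=512 seek=${unmap_blk_offs[i]} \
            count=${unmap_blk_nums[i]} conv=notrunc                  # zero the same range in the reference file
        blkdiscard -o $(( unmap_blk_offs[i] * 512 )) -l $(( unmap_blk_nums[i] * 512 )) /dev/nbd0
        blockdev --flushbufs /dev/nbd0
        cmp -b -n 2097152 /raidrandtest /dev/nbd0                    # discarded range must match the zeroed file
    done

The concat run that starts below repeats the same pass against a concat-level raid bdev.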
00:13:48.227 [2024-07-24 10:39:14.711952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.227 [2024-07-24 10:39:14.858697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.486 [2024-07-24 10:39:14.946204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.486 [2024-07-24 10:39:15.021117] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.051 10:39:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:49.051 10:39:15 -- common/autotest_common.sh@852 -- # return 0 00:13:49.051 10:39:15 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:13:49.051 10:39:15 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:13:49.051 10:39:15 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:49.051 10:39:15 -- bdev/bdev_raid.sh@70 -- # cat 00:13:49.051 10:39:15 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:49.619 [2024-07-24 10:39:16.031822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:49.619 [2024-07-24 10:39:16.034505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:49.619 [2024-07-24 10:39:16.034718] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:13:49.619 [2024-07-24 10:39:16.034864] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:49.619 [2024-07-24 10:39:16.035096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:13:49.619 [2024-07-24 10:39:16.035668] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:13:49.619 [2024-07-24 10:39:16.035827] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006080 00:13:49.619 [2024-07-24 10:39:16.036213] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.619 Base_1 00:13:49.619 Base_2 00:13:49.619 10:39:16 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:49.619 10:39:16 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:49.619 10:39:16 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:49.878 10:39:16 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:49.878 10:39:16 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:49.878 10:39:16 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@12 -- # local i 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:49.878 10:39:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:50.138 [2024-07-24 
10:39:16.604526] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:13:50.138 /dev/nbd0 00:13:50.138 10:39:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:50.138 10:39:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:50.138 10:39:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:50.139 10:39:16 -- common/autotest_common.sh@857 -- # local i 00:13:50.139 10:39:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:50.139 10:39:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:50.139 10:39:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:50.139 10:39:16 -- common/autotest_common.sh@861 -- # break 00:13:50.139 10:39:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:50.139 10:39:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:50.139 10:39:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:50.139 1+0 records in 00:13:50.139 1+0 records out 00:13:50.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361896 s, 11.3 MB/s 00:13:50.139 10:39:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.139 10:39:16 -- common/autotest_common.sh@874 -- # size=4096 00:13:50.139 10:39:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:50.139 10:39:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:50.139 10:39:16 -- common/autotest_common.sh@877 -- # return 0 00:13:50.139 10:39:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:50.139 10:39:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:50.139 10:39:16 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:50.139 10:39:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:50.139 10:39:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:50.397 { 00:13:50.397 "nbd_device": "/dev/nbd0", 00:13:50.397 "bdev_name": "raid" 00:13:50.397 } 00:13:50.397 ]' 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:50.397 { 00:13:50.397 "nbd_device": "/dev/nbd0", 00:13:50.397 "bdev_name": "raid" 00:13:50.397 } 00:13:50.397 ]' 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@65 -- # count=1 00:13:50.397 10:39:16 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@21 -- # cut -d 
' ' -f 5 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:50.397 4096+0 records in 00:13:50.397 4096+0 records out 00:13:50.397 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0292952 s, 71.6 MB/s 00:13:50.397 10:39:16 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:50.655 4096+0 records in 00:13:50.655 4096+0 records out 00:13:50.655 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.314175 s, 6.7 MB/s 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:50.655 128+0 records in 00:13:50.655 128+0 records out 00:13:50.655 65536 bytes (66 kB, 64 KiB) copied, 0.00109345 s, 59.9 MB/s 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:50.655 10:39:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:50.913 2035+0 records in 00:13:50.913 2035+0 records out 00:13:50.913 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00765905 s, 136 MB/s 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:50.913 456+0 records in 00:13:50.913 456+0 records out 00:13:50.913 233472 bytes (233 kB, 228 KiB) copied, 0.00148358 s, 157 MB/s 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:50.913 10:39:17 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:50.913 10:39:17 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:50.914 10:39:17 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:50.914 10:39:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:50.914 10:39:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:50.914 10:39:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:50.914 10:39:17 -- bdev/nbd_common.sh@51 -- # local i 00:13:50.914 10:39:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:50.914 10:39:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:51.172 [2024-07-24 10:39:17.721645] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@41 -- # break 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@45 -- # return 0 00:13:51.172 10:39:17 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:51.172 10:39:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:51.431 10:39:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:51.431 10:39:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:51.431 10:39:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:51.431 10:39:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:51.431 10:39:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:51.431 10:39:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:51.431 10:39:18 -- bdev/nbd_common.sh@65 -- # true 00:13:51.431 10:39:18 -- bdev/nbd_common.sh@65 -- # count=0 00:13:51.431 10:39:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:51.431 10:39:18 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:51.431 10:39:18 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:51.431 10:39:18 -- bdev/bdev_raid.sh@111 -- # killprocess 122700 00:13:51.431 10:39:18 -- common/autotest_common.sh@926 -- # '[' -z 122700 ']' 00:13:51.431 10:39:18 -- common/autotest_common.sh@930 -- # kill -0 122700 00:13:51.431 10:39:18 -- common/autotest_common.sh@931 -- # uname 00:13:51.431 10:39:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:51.431 10:39:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122700 00:13:51.431 killing process with pid 122700 00:13:51.431 10:39:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:51.431 10:39:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:51.431 10:39:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122700' 00:13:51.431 10:39:18 -- common/autotest_common.sh@945 -- # kill 122700 00:13:51.431 10:39:18 -- common/autotest_common.sh@950 -- 
# wait 122700 00:13:51.431 [2024-07-24 10:39:18.038289] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:51.431 [2024-07-24 10:39:18.038510] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.431 [2024-07-24 10:39:18.038597] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.431 [2024-07-24 10:39:18.038621] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name raid, state offline 00:13:51.431 [2024-07-24 10:39:18.067426] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:51.998 00:13:51.998 real 0m3.722s 00:13:51.998 user 0m5.107s 00:13:51.998 ************************************ 00:13:51.998 END TEST raid_function_test_concat 00:13:51.998 ************************************ 00:13:51.998 sys 0m1.013s 00:13:51.998 10:39:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.998 10:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:13:51.998 10:39:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:51.998 10:39:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:51.998 10:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:51.998 ************************************ 00:13:51.998 START TEST raid0_resize_test 00:13:51.998 ************************************ 00:13:51.998 10:39:18 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@301 -- # raid_pid=122848 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 122848' 00:13:51.998 Process raid pid: 122848 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:51.998 10:39:18 -- bdev/bdev_raid.sh@303 -- # waitforlisten 122848 /var/tmp/spdk-raid.sock 00:13:51.998 10:39:18 -- common/autotest_common.sh@819 -- # '[' -z 122848 ']' 00:13:51.998 10:39:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:51.998 10:39:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.998 10:39:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:51.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:51.998 10:39:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.998 10:39:18 -- common/autotest_common.sh@10 -- # set +x 00:13:51.998 [2024-07-24 10:39:18.483514] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
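Both function tests above finish with the same teardown check: stop the nbd export, expect nbd_get_disks to come back empty, and only then kill the bdev_svc process. A compact sketch with nbd_get_count inlined (rpc_py as defined earlier in the trace):

    $rpc_py nbd_stop_disk /dev/nbd0
    nbd_disks_json=$($rpc_py nbd_get_disks)                          # '[]' once the stop completed
    count=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] && exit 1                                     # any leftover nbd device fails the test
    kill "$raid_pid" && wait "$raid_pid"                             # killprocess also sanity-checks the pid/comm first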
00:13:51.998 [2024-07-24 10:39:18.484208] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.998 [2024-07-24 10:39:18.627037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.257 [2024-07-24 10:39:18.738599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.257 [2024-07-24 10:39:18.813642] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.835 10:39:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.835 10:39:19 -- common/autotest_common.sh@852 -- # return 0 00:13:52.835 10:39:19 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:53.093 Base_1 00:13:53.093 10:39:19 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:53.352 Base_2 00:13:53.352 10:39:19 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:53.610 [2024-07-24 10:39:20.132158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:53.610 [2024-07-24 10:39:20.134846] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:53.610 [2024-07-24 10:39:20.135078] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:13:53.610 [2024-07-24 10:39:20.135263] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:53.610 [2024-07-24 10:39:20.135656] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001de0 00:13:53.610 [2024-07-24 10:39:20.136317] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:13:53.610 [2024-07-24 10:39:20.136473] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006080 00:13:53.610 [2024-07-24 10:39:20.136910] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.610 10:39:20 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:53.869 [2024-07-24 10:39:20.352905] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:53.869 [2024-07-24 10:39:20.353148] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:53.869 true 00:13:53.869 10:39:20 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:53.869 10:39:20 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:13:54.191 [2024-07-24 10:39:20.625197] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.191 10:39:20 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:13:54.191 10:39:20 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:13:54.191 10:39:20 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:13:54.191 10:39:20 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:54.191 [2024-07-24 10:39:20.849013] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:13:54.191 [2024-07-24 10:39:20.849282] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:54.191 [2024-07-24 10:39:20.849564] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:13:54.191 [2024-07-24 10:39:20.849779] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:54.191 true 00:13:54.450 10:39:20 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:54.450 10:39:20 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:13:54.450 [2024-07-24 10:39:21.101303] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.450 10:39:21 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:13:54.450 10:39:21 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:13:54.450 10:39:21 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:13:54.450 10:39:21 -- bdev/bdev_raid.sh@332 -- # killprocess 122848 00:13:54.450 10:39:21 -- common/autotest_common.sh@926 -- # '[' -z 122848 ']' 00:13:54.450 10:39:21 -- common/autotest_common.sh@930 -- # kill -0 122848 00:13:54.450 10:39:21 -- common/autotest_common.sh@931 -- # uname 00:13:54.450 10:39:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:54.450 10:39:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122848 00:13:54.736 killing process with pid 122848 00:13:54.736 10:39:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:54.736 10:39:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:54.736 10:39:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122848' 00:13:54.736 10:39:21 -- common/autotest_common.sh@945 -- # kill 122848 00:13:54.736 [2024-07-24 10:39:21.142356] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:54.736 10:39:21 -- common/autotest_common.sh@950 -- # wait 122848 00:13:54.736 [2024-07-24 10:39:21.142510] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.736 [2024-07-24 10:39:21.142620] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.736 [2024-07-24 10:39:21.142647] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Raid, state offline 00:13:54.736 [2024-07-24 10:39:21.143344] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@334 -- # return 0 00:13:54.994 00:13:54.994 real 0m3.021s 00:13:54.994 user 0m4.639s 00:13:54.994 ************************************ 00:13:54.994 END TEST raid0_resize_test 00:13:54.994 ************************************ 00:13:54.994 sys 0m0.529s 00:13:54.994 10:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:54.994 10:39:21 -- common/autotest_common.sh@10 -- # set +x 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:54.994 10:39:21 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:54.994 10:39:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:54.994 10:39:21 -- common/autotest_common.sh@10 -- # set +x 00:13:54.994 ************************************ 00:13:54.994 START TEST 
raid_state_function_test 00:13:54.994 ************************************ 00:13:54.994 10:39:21 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:54.994 10:39:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=122932 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122932' 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:54.995 Process raid pid: 122932 00:13:54.995 10:39:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122932 /var/tmp/spdk-raid.sock 00:13:54.995 10:39:21 -- common/autotest_common.sh@819 -- # '[' -z 122932 ']' 00:13:54.995 10:39:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:54.995 10:39:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:54.995 10:39:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:54.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:54.995 10:39:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:54.995 10:39:21 -- common/autotest_common.sh@10 -- # set +x 00:13:54.995 [2024-07-24 10:39:21.572698] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
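The raid0_resize_test that just ended validates growth by recomputing the raid size from the reported block count: 131072 blocks of 512 bytes is the initial 64 MiB, and 262144 blocks is 128 MiB once both base bdevs have been resized. The check reduces to roughly this, where expected_size_mb stands in for the literal 64 and 128 the script compares against:

    blkcnt=$($rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    raid_size_mb=$(( blkcnt * 512 / 1048576 ))            # 131072 * 512 / 1048576 = 64; 262144 -> 128
    [ "$raid_size_mb" != "$expected_size_mb" ] && exit 1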
00:13:54.995 [2024-07-24 10:39:21.573134] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.253 [2024-07-24 10:39:21.722588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.253 [2024-07-24 10:39:21.831409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.253 [2024-07-24 10:39:21.909818] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.187 10:39:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:56.187 10:39:22 -- common/autotest_common.sh@852 -- # return 0 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:56.187 [2024-07-24 10:39:22.770014] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.187 [2024-07-24 10:39:22.770464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.187 [2024-07-24 10:39:22.770623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.187 [2024-07-24 10:39:22.770699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.187 10:39:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.446 10:39:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:56.446 "name": "Existed_Raid", 00:13:56.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.446 "strip_size_kb": 64, 00:13:56.446 "state": "configuring", 00:13:56.446 "raid_level": "raid0", 00:13:56.446 "superblock": false, 00:13:56.446 "num_base_bdevs": 2, 00:13:56.446 "num_base_bdevs_discovered": 0, 00:13:56.446 "num_base_bdevs_operational": 2, 00:13:56.446 "base_bdevs_list": [ 00:13:56.446 { 00:13:56.446 "name": "BaseBdev1", 00:13:56.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.446 "is_configured": false, 00:13:56.446 "data_offset": 0, 00:13:56.446 "data_size": 0 00:13:56.446 }, 00:13:56.446 { 00:13:56.446 "name": "BaseBdev2", 00:13:56.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.446 "is_configured": false, 00:13:56.446 "data_offset": 0, 00:13:56.446 "data_size": 0 00:13:56.446 } 00:13:56.446 ] 00:13:56.446 }' 00:13:56.446 10:39:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:56.446 10:39:23 -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.013 10:39:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:57.272 [2024-07-24 10:39:23.790051] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.272 [2024-07-24 10:39:23.790449] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:13:57.272 10:39:23 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:57.530 [2024-07-24 10:39:24.014063] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:57.530 [2024-07-24 10:39:24.014369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:57.530 [2024-07-24 10:39:24.014502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.530 [2024-07-24 10:39:24.014705] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.530 10:39:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:57.789 [2024-07-24 10:39:24.321987] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.789 BaseBdev1 00:13:57.789 10:39:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:57.789 10:39:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:57.789 10:39:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:57.789 10:39:24 -- common/autotest_common.sh@889 -- # local i 00:13:57.789 10:39:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:57.789 10:39:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:57.789 10:39:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:58.046 10:39:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:58.334 [ 00:13:58.334 { 00:13:58.334 "name": "BaseBdev1", 00:13:58.334 "aliases": [ 00:13:58.334 "f36b3113-5964-461d-935e-341c121928ad" 00:13:58.334 ], 00:13:58.334 "product_name": "Malloc disk", 00:13:58.334 "block_size": 512, 00:13:58.334 "num_blocks": 65536, 00:13:58.334 "uuid": "f36b3113-5964-461d-935e-341c121928ad", 00:13:58.334 "assigned_rate_limits": { 00:13:58.334 "rw_ios_per_sec": 0, 00:13:58.334 "rw_mbytes_per_sec": 0, 00:13:58.334 "r_mbytes_per_sec": 0, 00:13:58.334 "w_mbytes_per_sec": 0 00:13:58.334 }, 00:13:58.334 "claimed": true, 00:13:58.334 "claim_type": "exclusive_write", 00:13:58.334 "zoned": false, 00:13:58.334 "supported_io_types": { 00:13:58.334 "read": true, 00:13:58.334 "write": true, 00:13:58.334 "unmap": true, 00:13:58.334 "write_zeroes": true, 00:13:58.334 "flush": true, 00:13:58.334 "reset": true, 00:13:58.334 "compare": false, 00:13:58.334 "compare_and_write": false, 00:13:58.334 "abort": true, 00:13:58.334 "nvme_admin": false, 00:13:58.334 "nvme_io": false 00:13:58.334 }, 00:13:58.334 "memory_domains": [ 00:13:58.334 { 00:13:58.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.334 "dma_device_type": 2 00:13:58.334 } 00:13:58.334 ], 00:13:58.334 "driver_specific": {} 00:13:58.334 } 00:13:58.334 ] 00:13:58.334 10:39:24 
-- common/autotest_common.sh@895 -- # return 0 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.334 10:39:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.597 10:39:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:58.597 "name": "Existed_Raid", 00:13:58.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.597 "strip_size_kb": 64, 00:13:58.597 "state": "configuring", 00:13:58.597 "raid_level": "raid0", 00:13:58.597 "superblock": false, 00:13:58.597 "num_base_bdevs": 2, 00:13:58.597 "num_base_bdevs_discovered": 1, 00:13:58.597 "num_base_bdevs_operational": 2, 00:13:58.597 "base_bdevs_list": [ 00:13:58.597 { 00:13:58.597 "name": "BaseBdev1", 00:13:58.597 "uuid": "f36b3113-5964-461d-935e-341c121928ad", 00:13:58.597 "is_configured": true, 00:13:58.597 "data_offset": 0, 00:13:58.597 "data_size": 65536 00:13:58.597 }, 00:13:58.597 { 00:13:58.597 "name": "BaseBdev2", 00:13:58.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.597 "is_configured": false, 00:13:58.597 "data_offset": 0, 00:13:58.597 "data_size": 0 00:13:58.597 } 00:13:58.597 ] 00:13:58.597 }' 00:13:58.597 10:39:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:58.597 10:39:25 -- common/autotest_common.sh@10 -- # set +x 00:13:59.165 10:39:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.424 [2024-07-24 10:39:26.010560] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.424 [2024-07-24 10:39:26.011031] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:13:59.424 10:39:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:59.424 10:39:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:59.682 [2024-07-24 10:39:26.290785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.682 [2024-07-24 10:39:26.293573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.682 [2024-07-24 10:39:26.293809] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:59.682 10:39:26 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.682 10:39:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.941 10:39:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:59.941 "name": "Existed_Raid", 00:13:59.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.941 "strip_size_kb": 64, 00:13:59.941 "state": "configuring", 00:13:59.941 "raid_level": "raid0", 00:13:59.941 "superblock": false, 00:13:59.941 "num_base_bdevs": 2, 00:13:59.941 "num_base_bdevs_discovered": 1, 00:13:59.941 "num_base_bdevs_operational": 2, 00:13:59.941 "base_bdevs_list": [ 00:13:59.941 { 00:13:59.941 "name": "BaseBdev1", 00:13:59.941 "uuid": "f36b3113-5964-461d-935e-341c121928ad", 00:13:59.941 "is_configured": true, 00:13:59.941 "data_offset": 0, 00:13:59.941 "data_size": 65536 00:13:59.941 }, 00:13:59.941 { 00:13:59.941 "name": "BaseBdev2", 00:13:59.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.941 "is_configured": false, 00:13:59.941 "data_offset": 0, 00:13:59.941 "data_size": 0 00:13:59.941 } 00:13:59.941 ] 00:13:59.941 }' 00:13:59.941 10:39:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:59.941 10:39:26 -- common/autotest_common.sh@10 -- # set +x 00:14:00.876 10:39:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:00.876 [2024-07-24 10:39:27.512955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.876 [2024-07-24 10:39:27.513381] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:00.876 [2024-07-24 10:39:27.513635] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:00.876 [2024-07-24 10:39:27.514100] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:00.876 [2024-07-24 10:39:27.515097] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:00.876 [2024-07-24 10:39:27.515304] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:00.876 [2024-07-24 10:39:27.516051] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.876 BaseBdev2 00:14:00.876 10:39:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:00.876 10:39:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:00.876 10:39:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:00.876 10:39:27 -- common/autotest_common.sh@889 -- # local i 00:14:00.876 10:39:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:00.876 10:39:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:00.876 
10:39:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.135 10:39:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.394 [ 00:14:01.394 { 00:14:01.394 "name": "BaseBdev2", 00:14:01.394 "aliases": [ 00:14:01.394 "9a662994-9e10-4abf-a3d3-0765d100807f" 00:14:01.394 ], 00:14:01.394 "product_name": "Malloc disk", 00:14:01.394 "block_size": 512, 00:14:01.394 "num_blocks": 65536, 00:14:01.394 "uuid": "9a662994-9e10-4abf-a3d3-0765d100807f", 00:14:01.394 "assigned_rate_limits": { 00:14:01.394 "rw_ios_per_sec": 0, 00:14:01.394 "rw_mbytes_per_sec": 0, 00:14:01.394 "r_mbytes_per_sec": 0, 00:14:01.394 "w_mbytes_per_sec": 0 00:14:01.394 }, 00:14:01.394 "claimed": true, 00:14:01.394 "claim_type": "exclusive_write", 00:14:01.394 "zoned": false, 00:14:01.394 "supported_io_types": { 00:14:01.394 "read": true, 00:14:01.394 "write": true, 00:14:01.394 "unmap": true, 00:14:01.394 "write_zeroes": true, 00:14:01.394 "flush": true, 00:14:01.394 "reset": true, 00:14:01.394 "compare": false, 00:14:01.394 "compare_and_write": false, 00:14:01.394 "abort": true, 00:14:01.394 "nvme_admin": false, 00:14:01.394 "nvme_io": false 00:14:01.394 }, 00:14:01.394 "memory_domains": [ 00:14:01.394 { 00:14:01.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.394 "dma_device_type": 2 00:14:01.394 } 00:14:01.394 ], 00:14:01.394 "driver_specific": {} 00:14:01.394 } 00:14:01.394 ] 00:14:01.394 10:39:28 -- common/autotest_common.sh@895 -- # return 0 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.394 10:39:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.653 10:39:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:01.653 "name": "Existed_Raid", 00:14:01.653 "uuid": "fe0a344e-e857-46f2-8c6b-7d19537fd783", 00:14:01.653 "strip_size_kb": 64, 00:14:01.653 "state": "online", 00:14:01.653 "raid_level": "raid0", 00:14:01.653 "superblock": false, 00:14:01.653 "num_base_bdevs": 2, 00:14:01.653 "num_base_bdevs_discovered": 2, 00:14:01.653 "num_base_bdevs_operational": 2, 00:14:01.653 "base_bdevs_list": [ 00:14:01.653 { 00:14:01.653 "name": "BaseBdev1", 00:14:01.653 "uuid": "f36b3113-5964-461d-935e-341c121928ad", 00:14:01.653 "is_configured": true, 00:14:01.653 "data_offset": 0, 00:14:01.653 "data_size": 65536 00:14:01.653 }, 00:14:01.653 { 00:14:01.653 "name": "BaseBdev2", 
00:14:01.653 "uuid": "9a662994-9e10-4abf-a3d3-0765d100807f", 00:14:01.653 "is_configured": true, 00:14:01.653 "data_offset": 0, 00:14:01.653 "data_size": 65536 00:14:01.653 } 00:14:01.653 ] 00:14:01.653 }' 00:14:01.653 10:39:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:01.653 10:39:28 -- common/autotest_common.sh@10 -- # set +x 00:14:02.220 10:39:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:02.488 [2024-07-24 10:39:29.069703] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.488 [2024-07-24 10:39:29.070066] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.488 [2024-07-24 10:39:29.070368] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:02.488 10:39:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.765 10:39:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:02.765 "name": "Existed_Raid", 00:14:02.765 "uuid": "fe0a344e-e857-46f2-8c6b-7d19537fd783", 00:14:02.765 "strip_size_kb": 64, 00:14:02.765 "state": "offline", 00:14:02.765 "raid_level": "raid0", 00:14:02.765 "superblock": false, 00:14:02.765 "num_base_bdevs": 2, 00:14:02.765 "num_base_bdevs_discovered": 1, 00:14:02.765 "num_base_bdevs_operational": 1, 00:14:02.765 "base_bdevs_list": [ 00:14:02.765 { 00:14:02.765 "name": null, 00:14:02.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.765 "is_configured": false, 00:14:02.765 "data_offset": 0, 00:14:02.765 "data_size": 65536 00:14:02.765 }, 00:14:02.765 { 00:14:02.765 "name": "BaseBdev2", 00:14:02.765 "uuid": "9a662994-9e10-4abf-a3d3-0765d100807f", 00:14:02.765 "is_configured": true, 00:14:02.765 "data_offset": 0, 00:14:02.765 "data_size": 65536 00:14:02.765 } 00:14:02.765 ] 00:14:02.765 }' 00:14:02.765 10:39:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:02.765 10:39:29 -- common/autotest_common.sh@10 -- # set +x 00:14:03.331 10:39:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:03.331 10:39:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:03.331 10:39:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.331 10:39:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:03.590 10:39:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:03.590 10:39:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:03.590 10:39:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:03.848 [2024-07-24 10:39:30.529561] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:03.848 [2024-07-24 10:39:30.530093] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:04.107 10:39:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:04.107 10:39:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:04.107 10:39:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.107 10:39:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:04.365 10:39:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:04.365 10:39:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:04.365 10:39:30 -- bdev/bdev_raid.sh@287 -- # killprocess 122932 00:14:04.365 10:39:30 -- common/autotest_common.sh@926 -- # '[' -z 122932 ']' 00:14:04.365 10:39:30 -- common/autotest_common.sh@930 -- # kill -0 122932 00:14:04.365 10:39:30 -- common/autotest_common.sh@931 -- # uname 00:14:04.365 10:39:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:04.365 10:39:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122932 00:14:04.365 killing process with pid 122932 00:14:04.365 10:39:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:04.365 10:39:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:04.366 10:39:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122932' 00:14:04.366 10:39:30 -- common/autotest_common.sh@945 -- # kill 122932 00:14:04.366 10:39:30 -- common/autotest_common.sh@950 -- # wait 122932 00:14:04.366 [2024-07-24 10:39:30.887054] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.366 [2024-07-24 10:39:30.887624] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.624 ************************************ 00:14:04.624 END TEST raid_state_function_test 00:14:04.624 ************************************ 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:04.624 00:14:04.624 real 0m9.713s 00:14:04.624 user 0m17.509s 00:14:04.624 sys 0m1.305s 00:14:04.624 10:39:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.624 10:39:31 -- common/autotest_common.sh@10 -- # set +x 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:04.624 10:39:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:04.624 10:39:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:04.624 10:39:31 -- common/autotest_common.sh@10 -- # set +x 00:14:04.624 ************************************ 00:14:04.624 START TEST raid_state_function_test_sb 00:14:04.624 ************************************ 00:14:04.624 10:39:31 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:04.624 10:39:31 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:04.624 10:39:31 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@226 -- # raid_pid=123253 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.625 Process raid pid: 123253 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123253' 00:14:04.625 10:39:31 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123253 /var/tmp/spdk-raid.sock 00:14:04.625 10:39:31 -- common/autotest_common.sh@819 -- # '[' -z 123253 ']' 00:14:04.625 10:39:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.625 10:39:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:04.625 10:39:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:04.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.625 10:39:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:04.625 10:39:31 -- common/autotest_common.sh@10 -- # set +x 00:14:04.883 [2024-07-24 10:39:31.339268] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:14:04.883 [2024-07-24 10:39:31.339776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.883 [2024-07-24 10:39:31.485551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.141 [2024-07-24 10:39:31.608150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.141 [2024-07-24 10:39:31.684769] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.709 10:39:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:05.709 10:39:32 -- common/autotest_common.sh@852 -- # return 0 00:14:05.709 10:39:32 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:05.967 [2024-07-24 10:39:32.556063] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:05.967 [2024-07-24 10:39:32.556363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:05.967 [2024-07-24 10:39:32.556484] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:05.967 [2024-07-24 10:39:32.556622] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.967 10:39:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.225 10:39:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:06.225 "name": "Existed_Raid", 00:14:06.225 "uuid": "e3577148-a201-431d-89c9-6397aff278ed", 00:14:06.225 "strip_size_kb": 64, 00:14:06.225 "state": "configuring", 00:14:06.225 "raid_level": "raid0", 00:14:06.225 "superblock": true, 00:14:06.225 "num_base_bdevs": 2, 00:14:06.225 "num_base_bdevs_discovered": 0, 00:14:06.225 "num_base_bdevs_operational": 2, 00:14:06.225 "base_bdevs_list": [ 00:14:06.225 { 00:14:06.225 "name": "BaseBdev1", 00:14:06.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.226 "is_configured": false, 00:14:06.226 "data_offset": 0, 00:14:06.226 "data_size": 0 00:14:06.226 }, 00:14:06.226 { 00:14:06.226 "name": "BaseBdev2", 00:14:06.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.226 "is_configured": false, 00:14:06.226 "data_offset": 0, 00:14:06.226 "data_size": 0 00:14:06.226 } 00:14:06.226 ] 00:14:06.226 }' 00:14:06.226 10:39:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:06.226 10:39:32 -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.791 10:39:33 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:07.049 [2024-07-24 10:39:33.664676] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.049 [2024-07-24 10:39:33.665044] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:07.049 10:39:33 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:07.308 [2024-07-24 10:39:33.924823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.308 [2024-07-24 10:39:33.925312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.308 [2024-07-24 10:39:33.925454] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.308 [2024-07-24 10:39:33.925535] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.308 10:39:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:07.586 [2024-07-24 10:39:34.176855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:07.586 BaseBdev1 00:14:07.586 10:39:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:07.586 10:39:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:07.586 10:39:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:07.586 10:39:34 -- common/autotest_common.sh@889 -- # local i 00:14:07.586 10:39:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:07.586 10:39:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:07.586 10:39:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:07.845 10:39:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.103 [ 00:14:08.103 { 00:14:08.103 "name": "BaseBdev1", 00:14:08.103 "aliases": [ 00:14:08.103 "b227dbd0-6b4c-4e5b-81ee-b6ce288e3576" 00:14:08.103 ], 00:14:08.103 "product_name": "Malloc disk", 00:14:08.103 "block_size": 512, 00:14:08.103 "num_blocks": 65536, 00:14:08.103 "uuid": "b227dbd0-6b4c-4e5b-81ee-b6ce288e3576", 00:14:08.103 "assigned_rate_limits": { 00:14:08.103 "rw_ios_per_sec": 0, 00:14:08.103 "rw_mbytes_per_sec": 0, 00:14:08.103 "r_mbytes_per_sec": 0, 00:14:08.103 "w_mbytes_per_sec": 0 00:14:08.103 }, 00:14:08.103 "claimed": true, 00:14:08.103 "claim_type": "exclusive_write", 00:14:08.103 "zoned": false, 00:14:08.103 "supported_io_types": { 00:14:08.103 "read": true, 00:14:08.103 "write": true, 00:14:08.103 "unmap": true, 00:14:08.103 "write_zeroes": true, 00:14:08.103 "flush": true, 00:14:08.103 "reset": true, 00:14:08.103 "compare": false, 00:14:08.103 "compare_and_write": false, 00:14:08.103 "abort": true, 00:14:08.103 "nvme_admin": false, 00:14:08.103 "nvme_io": false 00:14:08.103 }, 00:14:08.103 "memory_domains": [ 00:14:08.103 { 00:14:08.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.103 "dma_device_type": 2 00:14:08.103 } 00:14:08.103 ], 00:14:08.103 "driver_specific": {} 00:14:08.103 } 00:14:08.103 ] 00:14:08.103 
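For reference, the RPC sequence traced above can be replayed by hand against a running bdev_svc instance. This is a minimal sketch, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock and rpc.py is invoked from an SPDK checkout; the socket path, bdev names and sizes are copied from the trace, and the jq filter is only illustrative:

  # Declare the raid0 bdev first; with -s a superblock is used, and the array
  # stays in "configuring" state until both base bdevs exist.
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
      -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # Create the first base bdev (32 MB malloc disk, 512-byte blocks) and let
  # the bdev layer finish examining and claiming it.
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine

  # With only one of two members present, the reported state should still be
  # "configuring" and num_base_bdevs_discovered should be 1.
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'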
10:39:34 -- common/autotest_common.sh@895 -- # return 0 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.103 10:39:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.361 10:39:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.361 "name": "Existed_Raid", 00:14:08.361 "uuid": "a0c68b8b-a27b-4f42-bbea-b32fcbdfc3b9", 00:14:08.361 "strip_size_kb": 64, 00:14:08.361 "state": "configuring", 00:14:08.361 "raid_level": "raid0", 00:14:08.361 "superblock": true, 00:14:08.361 "num_base_bdevs": 2, 00:14:08.361 "num_base_bdevs_discovered": 1, 00:14:08.361 "num_base_bdevs_operational": 2, 00:14:08.361 "base_bdevs_list": [ 00:14:08.362 { 00:14:08.362 "name": "BaseBdev1", 00:14:08.362 "uuid": "b227dbd0-6b4c-4e5b-81ee-b6ce288e3576", 00:14:08.362 "is_configured": true, 00:14:08.362 "data_offset": 2048, 00:14:08.362 "data_size": 63488 00:14:08.362 }, 00:14:08.362 { 00:14:08.362 "name": "BaseBdev2", 00:14:08.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.362 "is_configured": false, 00:14:08.362 "data_offset": 0, 00:14:08.362 "data_size": 0 00:14:08.362 } 00:14:08.362 ] 00:14:08.362 }' 00:14:08.362 10:39:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.362 10:39:34 -- common/autotest_common.sh@10 -- # set +x 00:14:09.300 10:39:35 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.300 [2024-07-24 10:39:35.901470] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.300 [2024-07-24 10:39:35.901867] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:09.300 10:39:35 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:09.300 10:39:35 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:09.558 10:39:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.817 BaseBdev1 00:14:09.817 10:39:36 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:09.817 10:39:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:09.817 10:39:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:09.817 10:39:36 -- common/autotest_common.sh@889 -- # local i 00:14:09.817 10:39:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:09.817 10:39:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:09.817 10:39:36 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.383 10:39:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:10.383 [ 00:14:10.383 { 00:14:10.383 "name": "BaseBdev1", 00:14:10.383 "aliases": [ 00:14:10.383 "5f8bd292-7471-44ca-ba30-5e0ccbc56630" 00:14:10.383 ], 00:14:10.383 "product_name": "Malloc disk", 00:14:10.383 "block_size": 512, 00:14:10.383 "num_blocks": 65536, 00:14:10.383 "uuid": "5f8bd292-7471-44ca-ba30-5e0ccbc56630", 00:14:10.383 "assigned_rate_limits": { 00:14:10.383 "rw_ios_per_sec": 0, 00:14:10.383 "rw_mbytes_per_sec": 0, 00:14:10.383 "r_mbytes_per_sec": 0, 00:14:10.383 "w_mbytes_per_sec": 0 00:14:10.383 }, 00:14:10.383 "claimed": false, 00:14:10.383 "zoned": false, 00:14:10.383 "supported_io_types": { 00:14:10.383 "read": true, 00:14:10.383 "write": true, 00:14:10.383 "unmap": true, 00:14:10.383 "write_zeroes": true, 00:14:10.383 "flush": true, 00:14:10.383 "reset": true, 00:14:10.383 "compare": false, 00:14:10.383 "compare_and_write": false, 00:14:10.383 "abort": true, 00:14:10.383 "nvme_admin": false, 00:14:10.383 "nvme_io": false 00:14:10.383 }, 00:14:10.383 "memory_domains": [ 00:14:10.383 { 00:14:10.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.383 "dma_device_type": 2 00:14:10.383 } 00:14:10.383 ], 00:14:10.383 "driver_specific": {} 00:14:10.383 } 00:14:10.383 ] 00:14:10.383 10:39:37 -- common/autotest_common.sh@895 -- # return 0 00:14:10.383 10:39:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:10.641 [2024-07-24 10:39:37.265063] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.641 [2024-07-24 10:39:37.267782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.641 [2024-07-24 10:39:37.267995] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.641 10:39:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.899 10:39:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.899 "name": "Existed_Raid", 00:14:10.899 "uuid": "df748e19-bc76-46eb-a88b-954bcee7e342", 00:14:10.899 "strip_size_kb": 64, 00:14:10.899 "state": 
"configuring", 00:14:10.899 "raid_level": "raid0", 00:14:10.899 "superblock": true, 00:14:10.899 "num_base_bdevs": 2, 00:14:10.899 "num_base_bdevs_discovered": 1, 00:14:10.899 "num_base_bdevs_operational": 2, 00:14:10.899 "base_bdevs_list": [ 00:14:10.899 { 00:14:10.899 "name": "BaseBdev1", 00:14:10.899 "uuid": "5f8bd292-7471-44ca-ba30-5e0ccbc56630", 00:14:10.899 "is_configured": true, 00:14:10.899 "data_offset": 2048, 00:14:10.900 "data_size": 63488 00:14:10.900 }, 00:14:10.900 { 00:14:10.900 "name": "BaseBdev2", 00:14:10.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.900 "is_configured": false, 00:14:10.900 "data_offset": 0, 00:14:10.900 "data_size": 0 00:14:10.900 } 00:14:10.900 ] 00:14:10.900 }' 00:14:10.900 10:39:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.900 10:39:37 -- common/autotest_common.sh@10 -- # set +x 00:14:11.836 10:39:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.836 [2024-07-24 10:39:38.501164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.836 [2024-07-24 10:39:38.501891] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:14:11.836 [2024-07-24 10:39:38.502094] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:11.836 [2024-07-24 10:39:38.502483] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:11.836 BaseBdev2 00:14:11.836 [2024-07-24 10:39:38.503255] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:14:11.836 [2024-07-24 10:39:38.503442] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:14:11.836 [2024-07-24 10:39:38.503836] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.095 10:39:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:12.095 10:39:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:12.095 10:39:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:12.095 10:39:38 -- common/autotest_common.sh@889 -- # local i 00:14:12.095 10:39:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:12.095 10:39:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:12.095 10:39:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.095 10:39:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.369 [ 00:14:12.369 { 00:14:12.369 "name": "BaseBdev2", 00:14:12.369 "aliases": [ 00:14:12.369 "78674469-8322-4d89-89de-dc4d0da1b911" 00:14:12.369 ], 00:14:12.369 "product_name": "Malloc disk", 00:14:12.369 "block_size": 512, 00:14:12.369 "num_blocks": 65536, 00:14:12.369 "uuid": "78674469-8322-4d89-89de-dc4d0da1b911", 00:14:12.369 "assigned_rate_limits": { 00:14:12.369 "rw_ios_per_sec": 0, 00:14:12.369 "rw_mbytes_per_sec": 0, 00:14:12.369 "r_mbytes_per_sec": 0, 00:14:12.369 "w_mbytes_per_sec": 0 00:14:12.369 }, 00:14:12.369 "claimed": true, 00:14:12.369 "claim_type": "exclusive_write", 00:14:12.369 "zoned": false, 00:14:12.369 "supported_io_types": { 00:14:12.369 "read": true, 00:14:12.369 "write": true, 00:14:12.369 "unmap": true, 00:14:12.369 "write_zeroes": true, 00:14:12.369 "flush": true, 00:14:12.369 
"reset": true, 00:14:12.369 "compare": false, 00:14:12.369 "compare_and_write": false, 00:14:12.369 "abort": true, 00:14:12.369 "nvme_admin": false, 00:14:12.369 "nvme_io": false 00:14:12.369 }, 00:14:12.369 "memory_domains": [ 00:14:12.369 { 00:14:12.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.369 "dma_device_type": 2 00:14:12.369 } 00:14:12.369 ], 00:14:12.369 "driver_specific": {} 00:14:12.369 } 00:14:12.369 ] 00:14:12.655 10:39:39 -- common/autotest_common.sh@895 -- # return 0 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.655 10:39:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.914 10:39:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.914 "name": "Existed_Raid", 00:14:12.914 "uuid": "df748e19-bc76-46eb-a88b-954bcee7e342", 00:14:12.914 "strip_size_kb": 64, 00:14:12.914 "state": "online", 00:14:12.914 "raid_level": "raid0", 00:14:12.914 "superblock": true, 00:14:12.914 "num_base_bdevs": 2, 00:14:12.914 "num_base_bdevs_discovered": 2, 00:14:12.914 "num_base_bdevs_operational": 2, 00:14:12.914 "base_bdevs_list": [ 00:14:12.914 { 00:14:12.914 "name": "BaseBdev1", 00:14:12.914 "uuid": "5f8bd292-7471-44ca-ba30-5e0ccbc56630", 00:14:12.914 "is_configured": true, 00:14:12.914 "data_offset": 2048, 00:14:12.914 "data_size": 63488 00:14:12.914 }, 00:14:12.914 { 00:14:12.914 "name": "BaseBdev2", 00:14:12.914 "uuid": "78674469-8322-4d89-89de-dc4d0da1b911", 00:14:12.914 "is_configured": true, 00:14:12.914 "data_offset": 2048, 00:14:12.914 "data_size": 63488 00:14:12.914 } 00:14:12.914 ] 00:14:12.914 }' 00:14:12.914 10:39:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.914 10:39:39 -- common/autotest_common.sh@10 -- # set +x 00:14:13.480 10:39:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:13.738 [2024-07-24 10:39:40.278099] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.738 [2024-07-24 10:39:40.278440] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.738 [2024-07-24 10:39:40.278668] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:13.738 
10:39:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.738 10:39:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.997 10:39:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.997 "name": "Existed_Raid", 00:14:13.997 "uuid": "df748e19-bc76-46eb-a88b-954bcee7e342", 00:14:13.997 "strip_size_kb": 64, 00:14:13.997 "state": "offline", 00:14:13.997 "raid_level": "raid0", 00:14:13.997 "superblock": true, 00:14:13.997 "num_base_bdevs": 2, 00:14:13.997 "num_base_bdevs_discovered": 1, 00:14:13.997 "num_base_bdevs_operational": 1, 00:14:13.997 "base_bdevs_list": [ 00:14:13.997 { 00:14:13.997 "name": null, 00:14:13.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.997 "is_configured": false, 00:14:13.997 "data_offset": 2048, 00:14:13.997 "data_size": 63488 00:14:13.997 }, 00:14:13.997 { 00:14:13.997 "name": "BaseBdev2", 00:14:13.997 "uuid": "78674469-8322-4d89-89de-dc4d0da1b911", 00:14:13.997 "is_configured": true, 00:14:13.997 "data_offset": 2048, 00:14:13.997 "data_size": 63488 00:14:13.997 } 00:14:13.997 ] 00:14:13.997 }' 00:14:13.997 10:39:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.997 10:39:40 -- common/autotest_common.sh@10 -- # set +x 00:14:14.563 10:39:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:14.563 10:39:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:14.563 10:39:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.563 10:39:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:14.821 10:39:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:14.821 10:39:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:14.821 10:39:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:15.078 [2024-07-24 10:39:41.751937] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.078 [2024-07-24 10:39:41.752406] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:14:15.336 10:39:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:15.336 10:39:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:15.336 10:39:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.336 10:39:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:15.594 10:39:42 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:15.595 10:39:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:15.595 10:39:42 -- bdev/bdev_raid.sh@287 -- # killprocess 123253 00:14:15.595 10:39:42 -- common/autotest_common.sh@926 -- # '[' -z 123253 ']' 00:14:15.595 10:39:42 -- common/autotest_common.sh@930 -- # kill -0 123253 00:14:15.595 10:39:42 -- common/autotest_common.sh@931 -- # uname 00:14:15.595 10:39:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:15.595 10:39:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123253 00:14:15.595 10:39:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:15.595 10:39:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:15.595 10:39:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123253' 00:14:15.595 killing process with pid 123253 00:14:15.595 10:39:42 -- common/autotest_common.sh@945 -- # kill 123253 00:14:15.595 10:39:42 -- common/autotest_common.sh@950 -- # wait 123253 00:14:15.595 [2024-07-24 10:39:42.045728] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.595 [2024-07-24 10:39:42.045830] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:15.866 00:14:15.866 real 0m11.024s 00:14:15.866 user 0m20.029s 00:14:15.866 sys 0m1.472s 00:14:15.866 10:39:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.866 10:39:42 -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 ************************************ 00:14:15.866 END TEST raid_state_function_test_sb 00:14:15.866 ************************************ 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:15.866 10:39:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:15.866 10:39:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:15.866 10:39:42 -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 ************************************ 00:14:15.866 START TEST raid_superblock_test 00:14:15.866 ************************************ 00:14:15.866 10:39:42 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@357 -- # raid_pid=123590 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123590 
/var/tmp/spdk-raid.sock 00:14:15.866 10:39:42 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:15.866 10:39:42 -- common/autotest_common.sh@819 -- # '[' -z 123590 ']' 00:14:15.866 10:39:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:15.866 10:39:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:15.866 10:39:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:15.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:15.866 10:39:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:15.866 10:39:42 -- common/autotest_common.sh@10 -- # set +x 00:14:15.866 [2024-07-24 10:39:42.435010] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:15.866 [2024-07-24 10:39:42.435490] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123590 ] 00:14:16.123 [2024-07-24 10:39:42.582510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.123 [2024-07-24 10:39:42.707229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.124 [2024-07-24 10:39:42.768836] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.058 10:39:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:17.058 10:39:43 -- common/autotest_common.sh@852 -- # return 0 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:17.058 malloc1 00:14:17.058 10:39:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:17.316 [2024-07-24 10:39:43.933102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:17.316 [2024-07-24 10:39:43.933514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.316 [2024-07-24 10:39:43.933730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:14:17.316 [2024-07-24 10:39:43.933975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.316 [2024-07-24 10:39:43.937225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.316 [2024-07-24 10:39:43.937411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:17.316 pt1 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
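Each member of raid_bdev1 is a passthru bdev layered on a malloc disk, which lets the test later delete a passthru device to simulate losing a base bdev while its backing storage stays intact. A minimal sketch of that setup, assuming the same bdev_svc socket; names, sizes and UUIDs are the ones shown in the trace (the second member, pt2, is created the same way just below):

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 \
      -u 00000000-0000-0000-0000-000000000002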
00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:17.316 10:39:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:17.574 malloc2 00:14:17.574 10:39:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:17.832 [2024-07-24 10:39:44.441218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:17.832 [2024-07-24 10:39:44.441537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.832 [2024-07-24 10:39:44.441701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:17.832 [2024-07-24 10:39:44.441862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.832 [2024-07-24 10:39:44.444748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.832 [2024-07-24 10:39:44.444935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:17.832 pt2 00:14:17.832 10:39:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:17.832 10:39:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:17.832 10:39:44 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:18.098 [2024-07-24 10:39:44.705454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:18.098 [2024-07-24 10:39:44.708030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:18.098 [2024-07-24 10:39:44.708430] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:14:18.098 [2024-07-24 10:39:44.708557] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:18.098 [2024-07-24 10:39:44.708832] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:18.098 [2024-07-24 10:39:44.709450] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:14:18.098 [2024-07-24 10:39:44.709607] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:14:18.098 [2024-07-24 10:39:44.709926] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.098 10:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.356 10:39:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:18.356 "name": "raid_bdev1", 00:14:18.356 "uuid": "1e14e0ae-11eb-4fb9-852c-13522cd1d540", 00:14:18.356 "strip_size_kb": 64, 00:14:18.356 "state": "online", 00:14:18.356 "raid_level": "raid0", 00:14:18.356 "superblock": true, 00:14:18.356 "num_base_bdevs": 2, 00:14:18.356 "num_base_bdevs_discovered": 2, 00:14:18.356 "num_base_bdevs_operational": 2, 00:14:18.356 "base_bdevs_list": [ 00:14:18.356 { 00:14:18.356 "name": "pt1", 00:14:18.356 "uuid": "e7b7b6ad-beb2-5552-9314-752fffad4d72", 00:14:18.356 "is_configured": true, 00:14:18.356 "data_offset": 2048, 00:14:18.356 "data_size": 63488 00:14:18.356 }, 00:14:18.356 { 00:14:18.356 "name": "pt2", 00:14:18.356 "uuid": "7794ace0-6079-5f4b-92d2-58969baf35b3", 00:14:18.356 "is_configured": true, 00:14:18.356 "data_offset": 2048, 00:14:18.356 "data_size": 63488 00:14:18.356 } 00:14:18.356 ] 00:14:18.356 }' 00:14:18.356 10:39:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:18.356 10:39:44 -- common/autotest_common.sh@10 -- # set +x 00:14:18.922 10:39:45 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:18.922 10:39:45 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:19.488 [2024-07-24 10:39:45.870458] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.488 10:39:45 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1e14e0ae-11eb-4fb9-852c-13522cd1d540 00:14:19.488 10:39:45 -- bdev/bdev_raid.sh@380 -- # '[' -z 1e14e0ae-11eb-4fb9-852c-13522cd1d540 ']' 00:14:19.488 10:39:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:19.488 [2024-07-24 10:39:46.094278] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:19.488 [2024-07-24 10:39:46.094557] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.488 [2024-07-24 10:39:46.094829] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.488 [2024-07-24 10:39:46.095009] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.488 [2024-07-24 10:39:46.095123] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:14:19.488 10:39:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.488 10:39:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:19.746 10:39:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:19.746 10:39:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:19.746 10:39:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:19.746 10:39:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:14:20.004 10:39:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:20.004 10:39:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:20.263 10:39:46 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:20.263 10:39:46 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:20.520 10:39:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:20.521 10:39:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:20.521 10:39:47 -- common/autotest_common.sh@640 -- # local es=0 00:14:20.521 10:39:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:20.521 10:39:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.521 10:39:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.521 10:39:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.521 10:39:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.521 10:39:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.521 10:39:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:20.521 10:39:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.521 10:39:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:20.521 10:39:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:20.779 [2024-07-24 10:39:47.302522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:20.779 [2024-07-24 10:39:47.305127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:20.779 [2024-07-24 10:39:47.305369] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:20.779 [2024-07-24 10:39:47.305612] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:20.779 [2024-07-24 10:39:47.305771] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.779 [2024-07-24 10:39:47.305876] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:14:20.779 request: 00:14:20.779 { 00:14:20.779 "name": "raid_bdev1", 00:14:20.779 "raid_level": "raid0", 00:14:20.779 "base_bdevs": [ 00:14:20.779 "malloc1", 00:14:20.779 "malloc2" 00:14:20.779 ], 00:14:20.779 "superblock": false, 00:14:20.779 "strip_size_kb": 64, 00:14:20.779 "method": "bdev_raid_create", 00:14:20.779 "req_id": 1 00:14:20.779 } 00:14:20.779 Got JSON-RPC error response 00:14:20.779 response: 00:14:20.779 { 00:14:20.779 "code": -17, 00:14:20.779 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:20.779 } 00:14:20.779 10:39:47 -- common/autotest_common.sh@643 -- # es=1 00:14:20.779 10:39:47 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:14:20.779 10:39:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:20.779 10:39:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:20.779 10:39:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.779 10:39:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:21.037 10:39:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:21.037 10:39:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:21.037 10:39:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:21.295 [2024-07-24 10:39:47.746506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:21.295 [2024-07-24 10:39:47.746847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.295 [2024-07-24 10:39:47.747025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:21.295 [2024-07-24 10:39:47.747185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.295 [2024-07-24 10:39:47.750271] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.295 [2024-07-24 10:39:47.750459] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:21.295 [2024-07-24 10:39:47.750710] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:21.295 [2024-07-24 10:39:47.750880] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:21.295 pt1 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.295 10:39:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.554 10:39:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.554 "name": "raid_bdev1", 00:14:21.554 "uuid": "1e14e0ae-11eb-4fb9-852c-13522cd1d540", 00:14:21.554 "strip_size_kb": 64, 00:14:21.554 "state": "configuring", 00:14:21.554 "raid_level": "raid0", 00:14:21.554 "superblock": true, 00:14:21.554 "num_base_bdevs": 2, 00:14:21.554 "num_base_bdevs_discovered": 1, 00:14:21.554 "num_base_bdevs_operational": 2, 00:14:21.555 "base_bdevs_list": [ 00:14:21.555 { 00:14:21.555 "name": "pt1", 00:14:21.555 "uuid": "e7b7b6ad-beb2-5552-9314-752fffad4d72", 00:14:21.555 "is_configured": true, 00:14:21.555 "data_offset": 2048, 00:14:21.555 "data_size": 63488 00:14:21.555 }, 00:14:21.555 { 00:14:21.555 "name": null, 00:14:21.555 "uuid": "7794ace0-6079-5f4b-92d2-58969baf35b3", 00:14:21.555 
"is_configured": false, 00:14:21.555 "data_offset": 2048, 00:14:21.555 "data_size": 63488 00:14:21.555 } 00:14:21.555 ] 00:14:21.555 }' 00:14:21.555 10:39:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.555 10:39:48 -- common/autotest_common.sh@10 -- # set +x 00:14:22.121 10:39:48 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:22.121 10:39:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:22.121 10:39:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:22.121 10:39:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.378 [2024-07-24 10:39:48.875080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.378 [2024-07-24 10:39:48.875430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.378 [2024-07-24 10:39:48.875619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:14:22.378 [2024-07-24 10:39:48.875757] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.378 [2024-07-24 10:39:48.876455] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.378 [2024-07-24 10:39:48.876621] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.378 [2024-07-24 10:39:48.876829] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:22.378 [2024-07-24 10:39:48.876966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.378 [2024-07-24 10:39:48.877211] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:22.378 [2024-07-24 10:39:48.877315] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:22.378 [2024-07-24 10:39:48.877447] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:22.379 [2024-07-24 10:39:48.877863] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:22.379 [2024-07-24 10:39:48.877995] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:22.379 [2024-07-24 10:39:48.878225] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.379 pt2 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:22.379 10:39:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.379 10:39:48 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.637 10:39:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:22.637 "name": "raid_bdev1", 00:14:22.637 "uuid": "1e14e0ae-11eb-4fb9-852c-13522cd1d540", 00:14:22.637 "strip_size_kb": 64, 00:14:22.637 "state": "online", 00:14:22.637 "raid_level": "raid0", 00:14:22.637 "superblock": true, 00:14:22.637 "num_base_bdevs": 2, 00:14:22.637 "num_base_bdevs_discovered": 2, 00:14:22.637 "num_base_bdevs_operational": 2, 00:14:22.637 "base_bdevs_list": [ 00:14:22.637 { 00:14:22.637 "name": "pt1", 00:14:22.637 "uuid": "e7b7b6ad-beb2-5552-9314-752fffad4d72", 00:14:22.637 "is_configured": true, 00:14:22.637 "data_offset": 2048, 00:14:22.637 "data_size": 63488 00:14:22.637 }, 00:14:22.637 { 00:14:22.637 "name": "pt2", 00:14:22.637 "uuid": "7794ace0-6079-5f4b-92d2-58969baf35b3", 00:14:22.637 "is_configured": true, 00:14:22.637 "data_offset": 2048, 00:14:22.637 "data_size": 63488 00:14:22.637 } 00:14:22.637 ] 00:14:22.637 }' 00:14:22.637 10:39:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:22.637 10:39:49 -- common/autotest_common.sh@10 -- # set +x 00:14:23.213 10:39:49 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.213 10:39:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:23.486 [2024-07-24 10:39:50.003712] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.486 10:39:50 -- bdev/bdev_raid.sh@430 -- # '[' 1e14e0ae-11eb-4fb9-852c-13522cd1d540 '!=' 1e14e0ae-11eb-4fb9-852c-13522cd1d540 ']' 00:14:23.486 10:39:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:23.486 10:39:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:23.486 10:39:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:23.486 10:39:50 -- bdev/bdev_raid.sh@511 -- # killprocess 123590 00:14:23.486 10:39:50 -- common/autotest_common.sh@926 -- # '[' -z 123590 ']' 00:14:23.486 10:39:50 -- common/autotest_common.sh@930 -- # kill -0 123590 00:14:23.486 10:39:50 -- common/autotest_common.sh@931 -- # uname 00:14:23.486 10:39:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:23.486 10:39:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123590 00:14:23.486 10:39:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:23.486 10:39:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:23.486 10:39:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123590' 00:14:23.486 killing process with pid 123590 00:14:23.486 10:39:50 -- common/autotest_common.sh@945 -- # kill 123590 00:14:23.486 [2024-07-24 10:39:50.053395] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:23.486 10:39:50 -- common/autotest_common.sh@950 -- # wait 123590 00:14:23.486 [2024-07-24 10:39:50.053683] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.486 [2024-07-24 10:39:50.053862] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.486 [2024-07-24 10:39:50.053956] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:23.486 [2024-07-24 10:39:50.085083] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:23.744 ************************************ 00:14:23.744 END TEST raid_superblock_test 00:14:23.744 ************************************ 00:14:23.744 10:39:50 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:14:23.744 00:14:23.744 real 0m8.037s 00:14:23.744 user 0m14.315s 00:14:23.744 sys 0m1.117s 00:14:23.744 10:39:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.744 10:39:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:24.003 10:39:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:24.003 10:39:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:24.003 10:39:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.003 ************************************ 00:14:24.003 START TEST raid_state_function_test 00:14:24.003 ************************************ 00:14:24.003 10:39:50 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=123825 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123825' 00:14:24.003 Process raid pid: 123825 00:14:24.003 10:39:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123825 /var/tmp/spdk-raid.sock 00:14:24.003 10:39:50 -- common/autotest_common.sh@819 -- # '[' -z 123825 ']' 00:14:24.003 10:39:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:24.003 10:39:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:24.003 10:39:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:24.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:24.003 10:39:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:24.003 10:39:50 -- common/autotest_common.sh@10 -- # set +x 00:14:24.003 [2024-07-24 10:39:50.529960] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:24.003 [2024-07-24 10:39:50.530417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.003 [2024-07-24 10:39:50.675010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.262 [2024-07-24 10:39:50.785721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.262 [2024-07-24 10:39:50.862101] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:24.828 10:39:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:24.828 10:39:51 -- common/autotest_common.sh@852 -- # return 0 00:14:24.829 10:39:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:25.087 [2024-07-24 10:39:51.623052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.087 [2024-07-24 10:39:51.623437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.087 [2024-07-24 10:39:51.623584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:25.087 [2024-07-24 10:39:51.623656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.087 10:39:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.345 10:39:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.345 "name": "Existed_Raid", 00:14:25.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.345 "strip_size_kb": 64, 00:14:25.345 "state": "configuring", 00:14:25.345 "raid_level": "concat", 00:14:25.345 "superblock": false, 00:14:25.345 "num_base_bdevs": 2, 00:14:25.345 "num_base_bdevs_discovered": 0, 00:14:25.345 "num_base_bdevs_operational": 2, 00:14:25.345 "base_bdevs_list": [ 00:14:25.345 { 00:14:25.345 "name": "BaseBdev1", 00:14:25.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.345 "is_configured": false, 00:14:25.345 "data_offset": 0, 00:14:25.345 "data_size": 
0 00:14:25.345 }, 00:14:25.345 { 00:14:25.345 "name": "BaseBdev2", 00:14:25.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.345 "is_configured": false, 00:14:25.345 "data_offset": 0, 00:14:25.345 "data_size": 0 00:14:25.345 } 00:14:25.345 ] 00:14:25.345 }' 00:14:25.345 10:39:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.345 10:39:51 -- common/autotest_common.sh@10 -- # set +x 00:14:25.911 10:39:52 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:26.169 [2024-07-24 10:39:52.819150] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.169 [2024-07-24 10:39:52.819458] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:26.169 10:39:52 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:26.427 [2024-07-24 10:39:53.039292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:26.427 [2024-07-24 10:39:53.039731] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:26.427 [2024-07-24 10:39:53.039887] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:26.427 [2024-07-24 10:39:53.039962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:26.427 10:39:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.685 [2024-07-24 10:39:53.315275] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.685 BaseBdev1 00:14:26.685 10:39:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:26.685 10:39:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:26.685 10:39:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:26.685 10:39:53 -- common/autotest_common.sh@889 -- # local i 00:14:26.685 10:39:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:26.685 10:39:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:26.685 10:39:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:26.943 10:39:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:27.203 [ 00:14:27.203 { 00:14:27.203 "name": "BaseBdev1", 00:14:27.203 "aliases": [ 00:14:27.203 "6e6b6d8a-bb5e-446b-990b-cff147b80f0a" 00:14:27.203 ], 00:14:27.203 "product_name": "Malloc disk", 00:14:27.203 "block_size": 512, 00:14:27.203 "num_blocks": 65536, 00:14:27.203 "uuid": "6e6b6d8a-bb5e-446b-990b-cff147b80f0a", 00:14:27.203 "assigned_rate_limits": { 00:14:27.203 "rw_ios_per_sec": 0, 00:14:27.203 "rw_mbytes_per_sec": 0, 00:14:27.203 "r_mbytes_per_sec": 0, 00:14:27.203 "w_mbytes_per_sec": 0 00:14:27.203 }, 00:14:27.203 "claimed": true, 00:14:27.203 "claim_type": "exclusive_write", 00:14:27.203 "zoned": false, 00:14:27.203 "supported_io_types": { 00:14:27.203 "read": true, 00:14:27.203 "write": true, 00:14:27.203 "unmap": true, 00:14:27.203 "write_zeroes": true, 00:14:27.203 "flush": true, 00:14:27.203 "reset": true, 00:14:27.203 "compare": false, 00:14:27.203 "compare_and_write": false, 
00:14:27.203 "abort": true, 00:14:27.203 "nvme_admin": false, 00:14:27.203 "nvme_io": false 00:14:27.203 }, 00:14:27.203 "memory_domains": [ 00:14:27.203 { 00:14:27.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.203 "dma_device_type": 2 00:14:27.203 } 00:14:27.203 ], 00:14:27.203 "driver_specific": {} 00:14:27.203 } 00:14:27.203 ] 00:14:27.203 10:39:53 -- common/autotest_common.sh@895 -- # return 0 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.204 10:39:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.475 10:39:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:27.475 "name": "Existed_Raid", 00:14:27.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.476 "strip_size_kb": 64, 00:14:27.476 "state": "configuring", 00:14:27.476 "raid_level": "concat", 00:14:27.476 "superblock": false, 00:14:27.476 "num_base_bdevs": 2, 00:14:27.476 "num_base_bdevs_discovered": 1, 00:14:27.476 "num_base_bdevs_operational": 2, 00:14:27.476 "base_bdevs_list": [ 00:14:27.476 { 00:14:27.476 "name": "BaseBdev1", 00:14:27.476 "uuid": "6e6b6d8a-bb5e-446b-990b-cff147b80f0a", 00:14:27.476 "is_configured": true, 00:14:27.476 "data_offset": 0, 00:14:27.476 "data_size": 65536 00:14:27.476 }, 00:14:27.476 { 00:14:27.476 "name": "BaseBdev2", 00:14:27.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.476 "is_configured": false, 00:14:27.476 "data_offset": 0, 00:14:27.476 "data_size": 0 00:14:27.476 } 00:14:27.476 ] 00:14:27.476 }' 00:14:27.476 10:39:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.476 10:39:54 -- common/autotest_common.sh@10 -- # set +x 00:14:28.042 10:39:54 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:28.301 [2024-07-24 10:39:54.867845] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:28.301 [2024-07-24 10:39:54.868262] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:28.301 10:39:54 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:28.301 10:39:54 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:28.559 [2024-07-24 10:39:55.132042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.559 [2024-07-24 10:39:55.134849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:28.559 [2024-07-24 10:39:55.135065] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.559 10:39:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.817 10:39:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:28.817 "name": "Existed_Raid", 00:14:28.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.817 "strip_size_kb": 64, 00:14:28.817 "state": "configuring", 00:14:28.817 "raid_level": "concat", 00:14:28.817 "superblock": false, 00:14:28.817 "num_base_bdevs": 2, 00:14:28.817 "num_base_bdevs_discovered": 1, 00:14:28.817 "num_base_bdevs_operational": 2, 00:14:28.817 "base_bdevs_list": [ 00:14:28.817 { 00:14:28.817 "name": "BaseBdev1", 00:14:28.817 "uuid": "6e6b6d8a-bb5e-446b-990b-cff147b80f0a", 00:14:28.817 "is_configured": true, 00:14:28.817 "data_offset": 0, 00:14:28.817 "data_size": 65536 00:14:28.817 }, 00:14:28.817 { 00:14:28.817 "name": "BaseBdev2", 00:14:28.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.817 "is_configured": false, 00:14:28.817 "data_offset": 0, 00:14:28.817 "data_size": 0 00:14:28.817 } 00:14:28.817 ] 00:14:28.817 }' 00:14:28.817 10:39:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:28.817 10:39:55 -- common/autotest_common.sh@10 -- # set +x 00:14:29.383 10:39:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:29.642 [2024-07-24 10:39:56.276624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:29.642 [2024-07-24 10:39:56.277029] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:29.642 [2024-07-24 10:39:56.277183] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:29.642 [2024-07-24 10:39:56.277452] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:29.642 [2024-07-24 10:39:56.278190] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:29.642 [2024-07-24 10:39:56.278376] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:29.642 [2024-07-24 10:39:56.278900] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.642 BaseBdev2 00:14:29.642 10:39:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:29.642 10:39:56 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:29.642 10:39:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.642 10:39:56 -- common/autotest_common.sh@889 -- # local i 00:14:29.642 10:39:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.642 10:39:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.642 10:39:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:29.900 10:39:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.158 [ 00:14:30.158 { 00:14:30.158 "name": "BaseBdev2", 00:14:30.158 "aliases": [ 00:14:30.158 "ebe20999-c774-4db7-b257-ae2774f4a6b1" 00:14:30.158 ], 00:14:30.158 "product_name": "Malloc disk", 00:14:30.158 "block_size": 512, 00:14:30.158 "num_blocks": 65536, 00:14:30.158 "uuid": "ebe20999-c774-4db7-b257-ae2774f4a6b1", 00:14:30.158 "assigned_rate_limits": { 00:14:30.158 "rw_ios_per_sec": 0, 00:14:30.158 "rw_mbytes_per_sec": 0, 00:14:30.158 "r_mbytes_per_sec": 0, 00:14:30.158 "w_mbytes_per_sec": 0 00:14:30.158 }, 00:14:30.158 "claimed": true, 00:14:30.158 "claim_type": "exclusive_write", 00:14:30.158 "zoned": false, 00:14:30.159 "supported_io_types": { 00:14:30.159 "read": true, 00:14:30.159 "write": true, 00:14:30.159 "unmap": true, 00:14:30.159 "write_zeroes": true, 00:14:30.159 "flush": true, 00:14:30.159 "reset": true, 00:14:30.159 "compare": false, 00:14:30.159 "compare_and_write": false, 00:14:30.159 "abort": true, 00:14:30.159 "nvme_admin": false, 00:14:30.159 "nvme_io": false 00:14:30.159 }, 00:14:30.159 "memory_domains": [ 00:14:30.159 { 00:14:30.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.159 "dma_device_type": 2 00:14:30.159 } 00:14:30.159 ], 00:14:30.159 "driver_specific": {} 00:14:30.159 } 00:14:30.159 ] 00:14:30.159 10:39:56 -- common/autotest_common.sh@895 -- # return 0 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.159 10:39:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.417 10:39:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.417 "name": "Existed_Raid", 00:14:30.417 "uuid": "4bfd934f-fce7-43a7-ada4-a31ae07821e3", 00:14:30.417 "strip_size_kb": 64, 00:14:30.417 "state": "online", 00:14:30.417 "raid_level": "concat", 00:14:30.417 "superblock": false, 00:14:30.417 "num_base_bdevs": 2, 00:14:30.417 
"num_base_bdevs_discovered": 2, 00:14:30.417 "num_base_bdevs_operational": 2, 00:14:30.417 "base_bdevs_list": [ 00:14:30.417 { 00:14:30.417 "name": "BaseBdev1", 00:14:30.417 "uuid": "6e6b6d8a-bb5e-446b-990b-cff147b80f0a", 00:14:30.417 "is_configured": true, 00:14:30.417 "data_offset": 0, 00:14:30.417 "data_size": 65536 00:14:30.417 }, 00:14:30.417 { 00:14:30.417 "name": "BaseBdev2", 00:14:30.417 "uuid": "ebe20999-c774-4db7-b257-ae2774f4a6b1", 00:14:30.417 "is_configured": true, 00:14:30.417 "data_offset": 0, 00:14:30.417 "data_size": 65536 00:14:30.417 } 00:14:30.417 ] 00:14:30.417 }' 00:14:30.417 10:39:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.417 10:39:56 -- common/autotest_common.sh@10 -- # set +x 00:14:30.985 10:39:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:31.244 [2024-07-24 10:39:57.789351] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.244 [2024-07-24 10:39:57.789717] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.244 [2024-07-24 10:39:57.789960] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.244 10:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.504 10:39:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:31.504 "name": "Existed_Raid", 00:14:31.504 "uuid": "4bfd934f-fce7-43a7-ada4-a31ae07821e3", 00:14:31.504 "strip_size_kb": 64, 00:14:31.504 "state": "offline", 00:14:31.504 "raid_level": "concat", 00:14:31.504 "superblock": false, 00:14:31.504 "num_base_bdevs": 2, 00:14:31.504 "num_base_bdevs_discovered": 1, 00:14:31.504 "num_base_bdevs_operational": 1, 00:14:31.504 "base_bdevs_list": [ 00:14:31.504 { 00:14:31.504 "name": null, 00:14:31.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.504 "is_configured": false, 00:14:31.504 "data_offset": 0, 00:14:31.504 "data_size": 65536 00:14:31.504 }, 00:14:31.504 { 00:14:31.504 "name": "BaseBdev2", 00:14:31.504 "uuid": "ebe20999-c774-4db7-b257-ae2774f4a6b1", 00:14:31.504 "is_configured": true, 00:14:31.504 "data_offset": 0, 00:14:31.504 "data_size": 65536 00:14:31.504 } 00:14:31.504 ] 
00:14:31.504 }' 00:14:31.504 10:39:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:31.504 10:39:58 -- common/autotest_common.sh@10 -- # set +x 00:14:32.070 10:39:58 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:32.070 10:39:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:32.070 10:39:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.070 10:39:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:32.328 10:39:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:32.328 10:39:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.328 10:39:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:32.587 [2024-07-24 10:39:59.210378] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.587 [2024-07-24 10:39:59.210700] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:14:32.587 10:39:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:32.587 10:39:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:32.587 10:39:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.587 10:39:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:32.846 10:39:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:32.846 10:39:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:32.846 10:39:59 -- bdev/bdev_raid.sh@287 -- # killprocess 123825 00:14:32.846 10:39:59 -- common/autotest_common.sh@926 -- # '[' -z 123825 ']' 00:14:32.846 10:39:59 -- common/autotest_common.sh@930 -- # kill -0 123825 00:14:32.846 10:39:59 -- common/autotest_common.sh@931 -- # uname 00:14:32.846 10:39:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.846 10:39:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123825 00:14:32.846 10:39:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:32.846 10:39:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:32.846 10:39:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123825' 00:14:32.846 killing process with pid 123825 00:14:32.846 10:39:59 -- common/autotest_common.sh@945 -- # kill 123825 00:14:32.846 10:39:59 -- common/autotest_common.sh@950 -- # wait 123825 00:14:32.846 [2024-07-24 10:39:59.508675] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.846 [2024-07-24 10:39:59.508813] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:33.414 00:14:33.414 real 0m9.363s 00:14:33.414 user 0m16.781s 00:14:33.414 sys 0m1.304s 00:14:33.414 10:39:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.414 10:39:59 -- common/autotest_common.sh@10 -- # set +x 00:14:33.414 ************************************ 00:14:33.414 END TEST raid_state_function_test 00:14:33.414 ************************************ 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:33.414 10:39:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:33.414 10:39:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:33.414 10:39:59 -- common/autotest_common.sh@10 -- # set +x 00:14:33.414 
************************************ 00:14:33.414 START TEST raid_state_function_test_sb 00:14:33.414 ************************************ 00:14:33.414 10:39:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=124141 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124141' 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:33.414 Process raid pid: 124141 00:14:33.414 10:39:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124141 /var/tmp/spdk-raid.sock 00:14:33.414 10:39:59 -- common/autotest_common.sh@819 -- # '[' -z 124141 ']' 00:14:33.414 10:39:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:33.414 10:39:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:33.414 10:39:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:33.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:33.414 10:39:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:33.414 10:39:59 -- common/autotest_common.sh@10 -- # set +x 00:14:33.414 [2024-07-24 10:39:59.964798] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:14:33.414 [2024-07-24 10:39:59.965311] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.672 [2024-07-24 10:40:00.112782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.672 [2024-07-24 10:40:00.238282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.672 [2024-07-24 10:40:00.310821] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.240 10:40:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:34.240 10:40:00 -- common/autotest_common.sh@852 -- # return 0 00:14:34.240 10:40:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:34.499 [2024-07-24 10:40:01.159401] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.499 [2024-07-24 10:40:01.159857] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.499 [2024-07-24 10:40:01.160039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.499 [2024-07-24 10:40:01.160109] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.499 10:40:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:34.499 10:40:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:34.499 10:40:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.499 10:40:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:34.499 10:40:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.499 10:40:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:34.499 10:40:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.757 10:40:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.757 10:40:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.757 10:40:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.757 10:40:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.757 10:40:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.015 10:40:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.015 "name": "Existed_Raid", 00:14:35.015 "uuid": "9d92150b-c789-46b4-9182-dfc974f3d37b", 00:14:35.015 "strip_size_kb": 64, 00:14:35.015 "state": "configuring", 00:14:35.015 "raid_level": "concat", 00:14:35.015 "superblock": true, 00:14:35.015 "num_base_bdevs": 2, 00:14:35.015 "num_base_bdevs_discovered": 0, 00:14:35.015 "num_base_bdevs_operational": 2, 00:14:35.015 "base_bdevs_list": [ 00:14:35.015 { 00:14:35.015 "name": "BaseBdev1", 00:14:35.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.015 "is_configured": false, 00:14:35.015 "data_offset": 0, 00:14:35.015 "data_size": 0 00:14:35.015 }, 00:14:35.015 { 00:14:35.015 "name": "BaseBdev2", 00:14:35.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.015 "is_configured": false, 00:14:35.015 "data_offset": 0, 00:14:35.015 "data_size": 0 00:14:35.015 } 00:14:35.015 ] 00:14:35.015 }' 00:14:35.015 10:40:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.015 10:40:01 -- 
common/autotest_common.sh@10 -- # set +x 00:14:35.598 10:40:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:35.868 [2024-07-24 10:40:02.335521] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.868 [2024-07-24 10:40:02.335880] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:35.868 10:40:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:36.127 [2024-07-24 10:40:02.591680] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.127 [2024-07-24 10:40:02.592030] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.127 [2024-07-24 10:40:02.592187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.127 [2024-07-24 10:40:02.592286] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.127 10:40:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:36.385 [2024-07-24 10:40:02.838104] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.385 BaseBdev1 00:14:36.385 10:40:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:36.385 10:40:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:36.385 10:40:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:36.385 10:40:02 -- common/autotest_common.sh@889 -- # local i 00:14:36.385 10:40:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:36.385 10:40:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:36.385 10:40:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:36.644 10:40:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:36.644 [ 00:14:36.644 { 00:14:36.644 "name": "BaseBdev1", 00:14:36.644 "aliases": [ 00:14:36.644 "159c0154-ea4e-434e-b473-aa8ba8001105" 00:14:36.644 ], 00:14:36.644 "product_name": "Malloc disk", 00:14:36.644 "block_size": 512, 00:14:36.644 "num_blocks": 65536, 00:14:36.644 "uuid": "159c0154-ea4e-434e-b473-aa8ba8001105", 00:14:36.644 "assigned_rate_limits": { 00:14:36.644 "rw_ios_per_sec": 0, 00:14:36.644 "rw_mbytes_per_sec": 0, 00:14:36.644 "r_mbytes_per_sec": 0, 00:14:36.644 "w_mbytes_per_sec": 0 00:14:36.644 }, 00:14:36.644 "claimed": true, 00:14:36.644 "claim_type": "exclusive_write", 00:14:36.644 "zoned": false, 00:14:36.644 "supported_io_types": { 00:14:36.644 "read": true, 00:14:36.644 "write": true, 00:14:36.644 "unmap": true, 00:14:36.644 "write_zeroes": true, 00:14:36.644 "flush": true, 00:14:36.644 "reset": true, 00:14:36.644 "compare": false, 00:14:36.644 "compare_and_write": false, 00:14:36.644 "abort": true, 00:14:36.644 "nvme_admin": false, 00:14:36.644 "nvme_io": false 00:14:36.644 }, 00:14:36.644 "memory_domains": [ 00:14:36.644 { 00:14:36.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.644 "dma_device_type": 2 00:14:36.644 } 00:14:36.644 ], 00:14:36.644 "driver_specific": {} 00:14:36.644 } 00:14:36.644 ] 00:14:36.903 
10:40:03 -- common/autotest_common.sh@895 -- # return 0 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.903 10:40:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.161 10:40:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.161 "name": "Existed_Raid", 00:14:37.161 "uuid": "097f01a1-5b3b-4552-bd41-cdc4e4d26684", 00:14:37.161 "strip_size_kb": 64, 00:14:37.161 "state": "configuring", 00:14:37.161 "raid_level": "concat", 00:14:37.161 "superblock": true, 00:14:37.161 "num_base_bdevs": 2, 00:14:37.161 "num_base_bdevs_discovered": 1, 00:14:37.161 "num_base_bdevs_operational": 2, 00:14:37.161 "base_bdevs_list": [ 00:14:37.161 { 00:14:37.161 "name": "BaseBdev1", 00:14:37.161 "uuid": "159c0154-ea4e-434e-b473-aa8ba8001105", 00:14:37.161 "is_configured": true, 00:14:37.161 "data_offset": 2048, 00:14:37.161 "data_size": 63488 00:14:37.161 }, 00:14:37.161 { 00:14:37.161 "name": "BaseBdev2", 00:14:37.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.161 "is_configured": false, 00:14:37.161 "data_offset": 0, 00:14:37.161 "data_size": 0 00:14:37.161 } 00:14:37.161 ] 00:14:37.161 }' 00:14:37.161 10:40:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.161 10:40:03 -- common/autotest_common.sh@10 -- # set +x 00:14:37.743 10:40:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:37.743 [2024-07-24 10:40:04.422605] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.743 [2024-07-24 10:40:04.422950] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:38.001 10:40:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:38.001 10:40:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:38.260 10:40:04 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:38.260 BaseBdev1 00:14:38.519 10:40:04 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:38.519 10:40:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:38.519 10:40:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:38.519 10:40:04 -- common/autotest_common.sh@889 -- # local i 00:14:38.519 10:40:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:38.519 10:40:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:38.519 10:40:04 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.777 10:40:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:38.777 [ 00:14:38.777 { 00:14:38.777 "name": "BaseBdev1", 00:14:38.777 "aliases": [ 00:14:38.777 "96d3ef7d-d0d4-4fe5-ae2a-e68d58a97d87" 00:14:38.777 ], 00:14:38.777 "product_name": "Malloc disk", 00:14:38.777 "block_size": 512, 00:14:38.777 "num_blocks": 65536, 00:14:38.777 "uuid": "96d3ef7d-d0d4-4fe5-ae2a-e68d58a97d87", 00:14:38.777 "assigned_rate_limits": { 00:14:38.777 "rw_ios_per_sec": 0, 00:14:38.777 "rw_mbytes_per_sec": 0, 00:14:38.777 "r_mbytes_per_sec": 0, 00:14:38.777 "w_mbytes_per_sec": 0 00:14:38.777 }, 00:14:38.777 "claimed": false, 00:14:38.777 "zoned": false, 00:14:38.777 "supported_io_types": { 00:14:38.777 "read": true, 00:14:38.777 "write": true, 00:14:38.777 "unmap": true, 00:14:38.777 "write_zeroes": true, 00:14:38.777 "flush": true, 00:14:38.777 "reset": true, 00:14:38.777 "compare": false, 00:14:38.777 "compare_and_write": false, 00:14:38.777 "abort": true, 00:14:38.777 "nvme_admin": false, 00:14:38.777 "nvme_io": false 00:14:38.777 }, 00:14:38.777 "memory_domains": [ 00:14:38.777 { 00:14:38.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.777 "dma_device_type": 2 00:14:38.777 } 00:14:38.777 ], 00:14:38.777 "driver_specific": {} 00:14:38.777 } 00:14:38.777 ] 00:14:38.777 10:40:05 -- common/autotest_common.sh@895 -- # return 0 00:14:38.777 10:40:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:39.036 [2024-07-24 10:40:05.639224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.036 [2024-07-24 10:40:05.642134] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.036 [2024-07-24 10:40:05.642378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.036 10:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.294 10:40:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:39.294 "name": "Existed_Raid", 00:14:39.294 "uuid": "062943e5-c353-4990-a5ec-b610d28372f4", 00:14:39.294 "strip_size_kb": 64, 00:14:39.294 "state": 
"configuring", 00:14:39.294 "raid_level": "concat", 00:14:39.294 "superblock": true, 00:14:39.294 "num_base_bdevs": 2, 00:14:39.294 "num_base_bdevs_discovered": 1, 00:14:39.294 "num_base_bdevs_operational": 2, 00:14:39.294 "base_bdevs_list": [ 00:14:39.294 { 00:14:39.294 "name": "BaseBdev1", 00:14:39.294 "uuid": "96d3ef7d-d0d4-4fe5-ae2a-e68d58a97d87", 00:14:39.294 "is_configured": true, 00:14:39.294 "data_offset": 2048, 00:14:39.294 "data_size": 63488 00:14:39.294 }, 00:14:39.294 { 00:14:39.295 "name": "BaseBdev2", 00:14:39.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.295 "is_configured": false, 00:14:39.295 "data_offset": 0, 00:14:39.295 "data_size": 0 00:14:39.295 } 00:14:39.295 ] 00:14:39.295 }' 00:14:39.295 10:40:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:39.295 10:40:05 -- common/autotest_common.sh@10 -- # set +x 00:14:40.255 10:40:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:40.255 [2024-07-24 10:40:06.824555] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.255 [2024-07-24 10:40:06.825145] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:14:40.255 [2024-07-24 10:40:06.825317] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:40.255 [2024-07-24 10:40:06.825532] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:14:40.255 [2024-07-24 10:40:06.826073] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:14:40.255 BaseBdev2 00:14:40.255 [2024-07-24 10:40:06.826205] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:14:40.255 [2024-07-24 10:40:06.826408] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.255 10:40:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:40.255 10:40:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:40.255 10:40:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:40.255 10:40:06 -- common/autotest_common.sh@889 -- # local i 00:14:40.255 10:40:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:40.255 10:40:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:40.255 10:40:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:40.513 10:40:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.772 [ 00:14:40.772 { 00:14:40.772 "name": "BaseBdev2", 00:14:40.772 "aliases": [ 00:14:40.772 "a06c56ee-6b61-4e2b-8e52-248b45907a3f" 00:14:40.772 ], 00:14:40.772 "product_name": "Malloc disk", 00:14:40.772 "block_size": 512, 00:14:40.772 "num_blocks": 65536, 00:14:40.772 "uuid": "a06c56ee-6b61-4e2b-8e52-248b45907a3f", 00:14:40.772 "assigned_rate_limits": { 00:14:40.772 "rw_ios_per_sec": 0, 00:14:40.772 "rw_mbytes_per_sec": 0, 00:14:40.772 "r_mbytes_per_sec": 0, 00:14:40.772 "w_mbytes_per_sec": 0 00:14:40.772 }, 00:14:40.772 "claimed": true, 00:14:40.772 "claim_type": "exclusive_write", 00:14:40.772 "zoned": false, 00:14:40.772 "supported_io_types": { 00:14:40.772 "read": true, 00:14:40.772 "write": true, 00:14:40.772 "unmap": true, 00:14:40.772 "write_zeroes": true, 00:14:40.772 "flush": true, 00:14:40.772 
"reset": true, 00:14:40.772 "compare": false, 00:14:40.772 "compare_and_write": false, 00:14:40.772 "abort": true, 00:14:40.772 "nvme_admin": false, 00:14:40.772 "nvme_io": false 00:14:40.772 }, 00:14:40.772 "memory_domains": [ 00:14:40.772 { 00:14:40.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.772 "dma_device_type": 2 00:14:40.772 } 00:14:40.772 ], 00:14:40.772 "driver_specific": {} 00:14:40.772 } 00:14:40.772 ] 00:14:40.772 10:40:07 -- common/autotest_common.sh@895 -- # return 0 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.772 10:40:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.030 10:40:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:41.030 "name": "Existed_Raid", 00:14:41.030 "uuid": "062943e5-c353-4990-a5ec-b610d28372f4", 00:14:41.030 "strip_size_kb": 64, 00:14:41.030 "state": "online", 00:14:41.030 "raid_level": "concat", 00:14:41.030 "superblock": true, 00:14:41.030 "num_base_bdevs": 2, 00:14:41.030 "num_base_bdevs_discovered": 2, 00:14:41.030 "num_base_bdevs_operational": 2, 00:14:41.030 "base_bdevs_list": [ 00:14:41.030 { 00:14:41.030 "name": "BaseBdev1", 00:14:41.030 "uuid": "96d3ef7d-d0d4-4fe5-ae2a-e68d58a97d87", 00:14:41.030 "is_configured": true, 00:14:41.030 "data_offset": 2048, 00:14:41.030 "data_size": 63488 00:14:41.030 }, 00:14:41.030 { 00:14:41.030 "name": "BaseBdev2", 00:14:41.030 "uuid": "a06c56ee-6b61-4e2b-8e52-248b45907a3f", 00:14:41.030 "is_configured": true, 00:14:41.030 "data_offset": 2048, 00:14:41.030 "data_size": 63488 00:14:41.030 } 00:14:41.030 ] 00:14:41.030 }' 00:14:41.030 10:40:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:41.030 10:40:07 -- common/autotest_common.sh@10 -- # set +x 00:14:41.964 10:40:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:41.965 [2024-07-24 10:40:08.552454] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.965 [2024-07-24 10:40:08.552806] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.965 [2024-07-24 10:40:08.553051] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:41.965 
10:40:08 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.965 10:40:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.223 10:40:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.223 "name": "Existed_Raid", 00:14:42.223 "uuid": "062943e5-c353-4990-a5ec-b610d28372f4", 00:14:42.223 "strip_size_kb": 64, 00:14:42.223 "state": "offline", 00:14:42.223 "raid_level": "concat", 00:14:42.223 "superblock": true, 00:14:42.223 "num_base_bdevs": 2, 00:14:42.223 "num_base_bdevs_discovered": 1, 00:14:42.223 "num_base_bdevs_operational": 1, 00:14:42.223 "base_bdevs_list": [ 00:14:42.223 { 00:14:42.223 "name": null, 00:14:42.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.223 "is_configured": false, 00:14:42.223 "data_offset": 2048, 00:14:42.223 "data_size": 63488 00:14:42.223 }, 00:14:42.223 { 00:14:42.223 "name": "BaseBdev2", 00:14:42.223 "uuid": "a06c56ee-6b61-4e2b-8e52-248b45907a3f", 00:14:42.223 "is_configured": true, 00:14:42.223 "data_offset": 2048, 00:14:42.223 "data_size": 63488 00:14:42.223 } 00:14:42.223 ] 00:14:42.223 }' 00:14:42.223 10:40:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.223 10:40:08 -- common/autotest_common.sh@10 -- # set +x 00:14:43.157 10:40:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:43.157 10:40:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:43.157 10:40:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.157 10:40:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:43.157 10:40:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:43.157 10:40:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.157 10:40:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:43.415 [2024-07-24 10:40:10.054317] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.415 [2024-07-24 10:40:10.054746] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:14:43.415 10:40:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:43.415 10:40:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:43.415 10:40:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.415 10:40:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:43.982 10:40:10 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:43.982 10:40:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:43.982 10:40:10 -- bdev/bdev_raid.sh@287 -- # killprocess 124141 00:14:43.982 10:40:10 -- common/autotest_common.sh@926 -- # '[' -z 124141 ']' 00:14:43.982 10:40:10 -- common/autotest_common.sh@930 -- # kill -0 124141 00:14:43.982 10:40:10 -- common/autotest_common.sh@931 -- # uname 00:14:43.982 10:40:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:43.982 10:40:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124141 00:14:43.982 killing process with pid 124141 00:14:43.982 10:40:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:43.982 10:40:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:43.982 10:40:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124141' 00:14:43.982 10:40:10 -- common/autotest_common.sh@945 -- # kill 124141 00:14:43.982 10:40:10 -- common/autotest_common.sh@950 -- # wait 124141 00:14:43.982 [2024-07-24 10:40:10.400025] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:43.982 [2024-07-24 10:40:10.400151] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:44.244 ************************************ 00:14:44.244 END TEST raid_state_function_test_sb 00:14:44.244 ************************************ 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:44.244 00:14:44.244 real 0m10.833s 00:14:44.244 user 0m19.571s 00:14:44.244 sys 0m1.457s 00:14:44.244 10:40:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.244 10:40:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:44.244 10:40:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:14:44.244 10:40:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.244 10:40:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.244 ************************************ 00:14:44.244 START TEST raid_superblock_test 00:14:44.244 ************************************ 00:14:44.244 10:40:10 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@357 -- # raid_pid=124470 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124470 
/var/tmp/spdk-raid.sock 00:14:44.244 10:40:10 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:44.244 10:40:10 -- common/autotest_common.sh@819 -- # '[' -z 124470 ']' 00:14:44.244 10:40:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:44.244 10:40:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:44.244 10:40:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:44.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:44.244 10:40:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:44.244 10:40:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.244 [2024-07-24 10:40:10.843412] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:44.244 [2024-07-24 10:40:10.843902] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124470 ] 00:14:44.515 [2024-07-24 10:40:10.986473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.515 [2024-07-24 10:40:11.080029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.515 [2024-07-24 10:40:11.155462] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.203 10:40:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:46.203 10:40:11 -- common/autotest_common.sh@852 -- # return 0 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:46.203 10:40:11 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:46.203 malloc1 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:46.203 [2024-07-24 10:40:12.253120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:46.203 [2024-07-24 10:40:12.253970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.203 [2024-07-24 10:40:12.254077] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:14:46.203 [2024-07-24 10:40:12.254409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.203 [2024-07-24 10:40:12.257523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.203 [2024-07-24 10:40:12.257756] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:46.203 pt1 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:46.203 10:40:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:46.204 10:40:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:46.204 malloc2 00:14:46.204 10:40:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:46.204 [2024-07-24 10:40:12.740524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:46.204 [2024-07-24 10:40:12.740798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.204 [2024-07-24 10:40:12.740985] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:46.204 [2024-07-24 10:40:12.741153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.204 [2024-07-24 10:40:12.744063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.204 [2024-07-24 10:40:12.744268] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:46.204 pt2 00:14:46.204 10:40:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:46.204 10:40:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:46.204 10:40:12 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:46.462 [2024-07-24 10:40:12.956840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:46.462 [2024-07-24 10:40:12.959491] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.462 [2024-07-24 10:40:12.959982] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:14:46.462 [2024-07-24 10:40:12.960164] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.462 [2024-07-24 10:40:12.960401] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:14:46.462 [2024-07-24 10:40:12.961001] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:14:46.462 [2024-07-24 10:40:12.961132] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:14:46.462 [2024-07-24 10:40:12.961471] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.462 10:40:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.719 10:40:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.719 "name": "raid_bdev1", 00:14:46.719 "uuid": "6d4b75e1-5f15-42ad-ac37-3546375e7128", 00:14:46.719 "strip_size_kb": 64, 00:14:46.719 "state": "online", 00:14:46.719 "raid_level": "concat", 00:14:46.719 "superblock": true, 00:14:46.719 "num_base_bdevs": 2, 00:14:46.719 "num_base_bdevs_discovered": 2, 00:14:46.719 "num_base_bdevs_operational": 2, 00:14:46.719 "base_bdevs_list": [ 00:14:46.719 { 00:14:46.719 "name": "pt1", 00:14:46.719 "uuid": "4433d4cc-1e0d-556d-b2e4-e528c5847b94", 00:14:46.719 "is_configured": true, 00:14:46.719 "data_offset": 2048, 00:14:46.719 "data_size": 63488 00:14:46.719 }, 00:14:46.719 { 00:14:46.719 "name": "pt2", 00:14:46.719 "uuid": "3c68a9df-c111-56c6-933c-2321b367d111", 00:14:46.719 "is_configured": true, 00:14:46.719 "data_offset": 2048, 00:14:46.719 "data_size": 63488 00:14:46.719 } 00:14:46.719 ] 00:14:46.719 }' 00:14:46.719 10:40:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.719 10:40:13 -- common/autotest_common.sh@10 -- # set +x 00:14:47.284 10:40:13 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:47.284 10:40:13 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:47.543 [2024-07-24 10:40:14.101966] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.543 10:40:14 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6d4b75e1-5f15-42ad-ac37-3546375e7128 00:14:47.543 10:40:14 -- bdev/bdev_raid.sh@380 -- # '[' -z 6d4b75e1-5f15-42ad-ac37-3546375e7128 ']' 00:14:47.543 10:40:14 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:47.801 [2024-07-24 10:40:14.377798] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:47.801 [2024-07-24 10:40:14.378143] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:47.801 [2024-07-24 10:40:14.378434] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.801 [2024-07-24 10:40:14.378623] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.801 [2024-07-24 10:40:14.378733] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:14:47.801 10:40:14 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.801 10:40:14 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:48.060 10:40:14 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:48.060 10:40:14 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:48.060 10:40:14 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.060 10:40:14 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:14:48.318 10:40:14 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:48.318 10:40:14 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:48.578 10:40:15 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:48.578 10:40:15 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:48.837 10:40:15 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:48.837 10:40:15 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:48.837 10:40:15 -- common/autotest_common.sh@640 -- # local es=0 00:14:48.837 10:40:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:48.837 10:40:15 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.837 10:40:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:48.837 10:40:15 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.837 10:40:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:48.837 10:40:15 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.837 10:40:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:48.837 10:40:15 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:48.837 10:40:15 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:48.837 10:40:15 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:49.096 [2024-07-24 10:40:15.606033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:49.096 [2024-07-24 10:40:15.608584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:49.096 [2024-07-24 10:40:15.608846] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:49.096 [2024-07-24 10:40:15.609087] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:49.096 [2024-07-24 10:40:15.609248] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.096 [2024-07-24 10:40:15.609353] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:14:49.096 request: 00:14:49.096 { 00:14:49.096 "name": "raid_bdev1", 00:14:49.096 "raid_level": "concat", 00:14:49.096 "base_bdevs": [ 00:14:49.096 "malloc1", 00:14:49.096 "malloc2" 00:14:49.096 ], 00:14:49.096 "superblock": false, 00:14:49.096 "strip_size_kb": 64, 00:14:49.096 "method": "bdev_raid_create", 00:14:49.096 "req_id": 1 00:14:49.096 } 00:14:49.096 Got JSON-RPC error response 00:14:49.096 response: 00:14:49.096 { 00:14:49.096 "code": -17, 00:14:49.096 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:49.096 } 00:14:49.096 10:40:15 -- common/autotest_common.sh@643 -- # es=1 00:14:49.096 10:40:15 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:49.097 10:40:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:49.097 10:40:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:49.097 10:40:15 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.097 10:40:15 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:49.354 10:40:15 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:49.354 10:40:15 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:49.354 10:40:15 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:49.614 [2024-07-24 10:40:16.126255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:49.614 [2024-07-24 10:40:16.126590] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.614 [2024-07-24 10:40:16.126781] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:49.614 [2024-07-24 10:40:16.126929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.614 [2024-07-24 10:40:16.129736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.614 [2024-07-24 10:40:16.129926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:49.614 [2024-07-24 10:40:16.130157] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:49.614 [2024-07-24 10:40:16.130337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:49.614 pt1 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.614 10:40:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.873 10:40:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.873 "name": "raid_bdev1", 00:14:49.873 "uuid": "6d4b75e1-5f15-42ad-ac37-3546375e7128", 00:14:49.873 "strip_size_kb": 64, 00:14:49.873 "state": "configuring", 00:14:49.873 "raid_level": "concat", 00:14:49.873 "superblock": true, 00:14:49.873 "num_base_bdevs": 2, 00:14:49.873 "num_base_bdevs_discovered": 1, 00:14:49.873 "num_base_bdevs_operational": 2, 00:14:49.873 "base_bdevs_list": [ 00:14:49.873 { 00:14:49.873 "name": "pt1", 00:14:49.873 "uuid": "4433d4cc-1e0d-556d-b2e4-e528c5847b94", 00:14:49.873 "is_configured": true, 00:14:49.873 "data_offset": 2048, 00:14:49.873 "data_size": 63488 00:14:49.873 }, 00:14:49.873 { 00:14:49.873 "name": null, 00:14:49.873 "uuid": 
"3c68a9df-c111-56c6-933c-2321b367d111", 00:14:49.873 "is_configured": false, 00:14:49.873 "data_offset": 2048, 00:14:49.873 "data_size": 63488 00:14:49.873 } 00:14:49.873 ] 00:14:49.873 }' 00:14:49.873 10:40:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.873 10:40:16 -- common/autotest_common.sh@10 -- # set +x 00:14:50.445 10:40:17 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:50.445 10:40:17 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:50.445 10:40:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:50.445 10:40:17 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:50.705 [2024-07-24 10:40:17.223016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:50.705 [2024-07-24 10:40:17.223348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.705 [2024-07-24 10:40:17.223521] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:14:50.705 [2024-07-24 10:40:17.223668] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.705 [2024-07-24 10:40:17.224325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.705 [2024-07-24 10:40:17.224505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:50.705 [2024-07-24 10:40:17.224740] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:50.705 [2024-07-24 10:40:17.224883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:50.705 [2024-07-24 10:40:17.225142] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:50.705 [2024-07-24 10:40:17.225250] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:50.705 [2024-07-24 10:40:17.225473] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:50.705 [2024-07-24 10:40:17.225972] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:50.705 [2024-07-24 10:40:17.226107] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:50.705 [2024-07-24 10:40:17.226311] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.705 pt2 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.705 10:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.964 10:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.964 "name": "raid_bdev1", 00:14:50.964 "uuid": "6d4b75e1-5f15-42ad-ac37-3546375e7128", 00:14:50.964 "strip_size_kb": 64, 00:14:50.964 "state": "online", 00:14:50.964 "raid_level": "concat", 00:14:50.964 "superblock": true, 00:14:50.964 "num_base_bdevs": 2, 00:14:50.964 "num_base_bdevs_discovered": 2, 00:14:50.964 "num_base_bdevs_operational": 2, 00:14:50.964 "base_bdevs_list": [ 00:14:50.964 { 00:14:50.964 "name": "pt1", 00:14:50.964 "uuid": "4433d4cc-1e0d-556d-b2e4-e528c5847b94", 00:14:50.964 "is_configured": true, 00:14:50.964 "data_offset": 2048, 00:14:50.964 "data_size": 63488 00:14:50.964 }, 00:14:50.964 { 00:14:50.964 "name": "pt2", 00:14:50.964 "uuid": "3c68a9df-c111-56c6-933c-2321b367d111", 00:14:50.964 "is_configured": true, 00:14:50.964 "data_offset": 2048, 00:14:50.964 "data_size": 63488 00:14:50.964 } 00:14:50.964 ] 00:14:50.964 }' 00:14:50.964 10:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.964 10:40:17 -- common/autotest_common.sh@10 -- # set +x 00:14:51.532 10:40:18 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:51.532 10:40:18 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:51.790 [2024-07-24 10:40:18.311668] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:51.790 10:40:18 -- bdev/bdev_raid.sh@430 -- # '[' 6d4b75e1-5f15-42ad-ac37-3546375e7128 '!=' 6d4b75e1-5f15-42ad-ac37-3546375e7128 ']' 00:14:51.790 10:40:18 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:51.790 10:40:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:51.790 10:40:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:51.790 10:40:18 -- bdev/bdev_raid.sh@511 -- # killprocess 124470 00:14:51.790 10:40:18 -- common/autotest_common.sh@926 -- # '[' -z 124470 ']' 00:14:51.790 10:40:18 -- common/autotest_common.sh@930 -- # kill -0 124470 00:14:51.790 10:40:18 -- common/autotest_common.sh@931 -- # uname 00:14:51.790 10:40:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:51.790 10:40:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124470 00:14:51.790 killing process with pid 124470 00:14:51.790 10:40:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:51.790 10:40:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:51.790 10:40:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124470' 00:14:51.790 10:40:18 -- common/autotest_common.sh@945 -- # kill 124470 00:14:51.790 10:40:18 -- common/autotest_common.sh@950 -- # wait 124470 00:14:51.790 [2024-07-24 10:40:18.360003] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.790 [2024-07-24 10:40:18.360092] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.790 [2024-07-24 10:40:18.360185] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.790 [2024-07-24 10:40:18.360196] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:51.790 [2024-07-24 10:40:18.391573] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.049 10:40:18 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:52.049 00:14:52.049 real 0m7.910s 
00:14:52.049 user 0m14.075s 00:14:52.049 sys 0m1.170s 00:14:52.049 ************************************ 00:14:52.049 END TEST raid_superblock_test 00:14:52.049 ************************************ 00:14:52.049 10:40:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:52.049 10:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:52.308 10:40:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:52.308 10:40:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:52.308 10:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:52.308 ************************************ 00:14:52.308 START TEST raid_state_function_test 00:14:52.308 ************************************ 00:14:52.308 10:40:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=124715 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124715' 00:14:52.308 Process raid pid: 124715 00:14:52.308 10:40:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124715 /var/tmp/spdk-raid.sock 00:14:52.308 10:40:18 -- common/autotest_common.sh@819 -- # '[' -z 124715 ']' 00:14:52.308 10:40:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:52.308 10:40:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.308 10:40:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:52.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:52.308 10:40:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.308 10:40:18 -- common/autotest_common.sh@10 -- # set +x 00:14:52.308 [2024-07-24 10:40:18.817236] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:52.308 [2024-07-24 10:40:18.817681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.308 [2024-07-24 10:40:18.956911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.567 [2024-07-24 10:40:19.075798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.567 [2024-07-24 10:40:19.152349] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.133 10:40:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:53.133 10:40:19 -- common/autotest_common.sh@852 -- # return 0 00:14:53.133 10:40:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:53.391 [2024-07-24 10:40:20.056321] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.391 [2024-07-24 10:40:20.056836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.391 [2024-07-24 10:40:20.057001] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.391 [2024-07-24 10:40:20.057086] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.650 "name": "Existed_Raid", 00:14:53.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.650 "strip_size_kb": 0, 00:14:53.650 "state": "configuring", 00:14:53.650 "raid_level": "raid1", 00:14:53.650 "superblock": false, 00:14:53.650 "num_base_bdevs": 2, 00:14:53.650 "num_base_bdevs_discovered": 0, 00:14:53.650 "num_base_bdevs_operational": 2, 00:14:53.650 "base_bdevs_list": [ 00:14:53.650 { 00:14:53.650 "name": "BaseBdev1", 00:14:53.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.650 "is_configured": false, 00:14:53.650 "data_offset": 0, 00:14:53.650 "data_size": 0 
00:14:53.650 }, 00:14:53.650 { 00:14:53.650 "name": "BaseBdev2", 00:14:53.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.650 "is_configured": false, 00:14:53.650 "data_offset": 0, 00:14:53.650 "data_size": 0 00:14:53.650 } 00:14:53.650 ] 00:14:53.650 }' 00:14:53.650 10:40:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.650 10:40:20 -- common/autotest_common.sh@10 -- # set +x 00:14:54.584 10:40:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:54.584 [2024-07-24 10:40:21.136402] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.584 [2024-07-24 10:40:21.136693] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:14:54.584 10:40:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:54.842 [2024-07-24 10:40:21.344533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.842 [2024-07-24 10:40:21.344957] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.842 [2024-07-24 10:40:21.345128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.842 [2024-07-24 10:40:21.345274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.842 10:40:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:55.100 [2024-07-24 10:40:21.579595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.100 BaseBdev1 00:14:55.100 10:40:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:55.100 10:40:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:55.100 10:40:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:55.100 10:40:21 -- common/autotest_common.sh@889 -- # local i 00:14:55.100 10:40:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:55.100 10:40:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:55.100 10:40:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.358 10:40:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:55.617 [ 00:14:55.617 { 00:14:55.617 "name": "BaseBdev1", 00:14:55.617 "aliases": [ 00:14:55.617 "d2c98a1b-a02f-42bd-84a9-bcb93a2db450" 00:14:55.617 ], 00:14:55.617 "product_name": "Malloc disk", 00:14:55.617 "block_size": 512, 00:14:55.617 "num_blocks": 65536, 00:14:55.617 "uuid": "d2c98a1b-a02f-42bd-84a9-bcb93a2db450", 00:14:55.617 "assigned_rate_limits": { 00:14:55.617 "rw_ios_per_sec": 0, 00:14:55.617 "rw_mbytes_per_sec": 0, 00:14:55.617 "r_mbytes_per_sec": 0, 00:14:55.617 "w_mbytes_per_sec": 0 00:14:55.617 }, 00:14:55.617 "claimed": true, 00:14:55.617 "claim_type": "exclusive_write", 00:14:55.617 "zoned": false, 00:14:55.617 "supported_io_types": { 00:14:55.617 "read": true, 00:14:55.617 "write": true, 00:14:55.617 "unmap": true, 00:14:55.617 "write_zeroes": true, 00:14:55.617 "flush": true, 00:14:55.617 "reset": true, 00:14:55.617 "compare": false, 00:14:55.617 "compare_and_write": false, 
00:14:55.617 "abort": true, 00:14:55.617 "nvme_admin": false, 00:14:55.617 "nvme_io": false 00:14:55.617 }, 00:14:55.617 "memory_domains": [ 00:14:55.617 { 00:14:55.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.617 "dma_device_type": 2 00:14:55.617 } 00:14:55.617 ], 00:14:55.617 "driver_specific": {} 00:14:55.617 } 00:14:55.617 ] 00:14:55.617 10:40:22 -- common/autotest_common.sh@895 -- # return 0 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.617 10:40:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.875 10:40:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.876 "name": "Existed_Raid", 00:14:55.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.876 "strip_size_kb": 0, 00:14:55.876 "state": "configuring", 00:14:55.876 "raid_level": "raid1", 00:14:55.876 "superblock": false, 00:14:55.876 "num_base_bdevs": 2, 00:14:55.876 "num_base_bdevs_discovered": 1, 00:14:55.876 "num_base_bdevs_operational": 2, 00:14:55.876 "base_bdevs_list": [ 00:14:55.876 { 00:14:55.876 "name": "BaseBdev1", 00:14:55.876 "uuid": "d2c98a1b-a02f-42bd-84a9-bcb93a2db450", 00:14:55.876 "is_configured": true, 00:14:55.876 "data_offset": 0, 00:14:55.876 "data_size": 65536 00:14:55.876 }, 00:14:55.876 { 00:14:55.876 "name": "BaseBdev2", 00:14:55.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.876 "is_configured": false, 00:14:55.876 "data_offset": 0, 00:14:55.876 "data_size": 0 00:14:55.876 } 00:14:55.876 ] 00:14:55.876 }' 00:14:55.876 10:40:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.876 10:40:22 -- common/autotest_common.sh@10 -- # set +x 00:14:56.443 10:40:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:56.701 [2024-07-24 10:40:23.304183] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.701 [2024-07-24 10:40:23.304640] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:14:56.701 10:40:23 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:56.701 10:40:23 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:56.959 [2024-07-24 10:40:23.568371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.959 [2024-07-24 10:40:23.571055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.959 [2024-07-24 10:40:23.571293] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.959 10:40:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.218 10:40:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.218 "name": "Existed_Raid", 00:14:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.218 "strip_size_kb": 0, 00:14:57.218 "state": "configuring", 00:14:57.218 "raid_level": "raid1", 00:14:57.218 "superblock": false, 00:14:57.218 "num_base_bdevs": 2, 00:14:57.218 "num_base_bdevs_discovered": 1, 00:14:57.218 "num_base_bdevs_operational": 2, 00:14:57.218 "base_bdevs_list": [ 00:14:57.218 { 00:14:57.218 "name": "BaseBdev1", 00:14:57.218 "uuid": "d2c98a1b-a02f-42bd-84a9-bcb93a2db450", 00:14:57.218 "is_configured": true, 00:14:57.218 "data_offset": 0, 00:14:57.218 "data_size": 65536 00:14:57.218 }, 00:14:57.218 { 00:14:57.218 "name": "BaseBdev2", 00:14:57.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.218 "is_configured": false, 00:14:57.218 "data_offset": 0, 00:14:57.218 "data_size": 0 00:14:57.218 } 00:14:57.218 ] 00:14:57.218 }' 00:14:57.218 10:40:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.218 10:40:23 -- common/autotest_common.sh@10 -- # set +x 00:14:58.153 10:40:24 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:58.153 [2024-07-24 10:40:24.777279] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:58.153 [2024-07-24 10:40:24.777723] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:14:58.153 [2024-07-24 10:40:24.777884] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:58.153 [2024-07-24 10:40:24.778271] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000001f80 00:14:58.153 [2024-07-24 10:40:24.778978] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:14:58.153 [2024-07-24 10:40:24.779113] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:14:58.153 [2024-07-24 10:40:24.779600] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.153 BaseBdev2 00:14:58.153 10:40:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:58.153 10:40:24 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:58.153 10:40:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:58.153 10:40:24 -- common/autotest_common.sh@889 -- # local i 00:14:58.153 10:40:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:58.153 10:40:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:58.153 10:40:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.423 10:40:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:58.695 [ 00:14:58.695 { 00:14:58.695 "name": "BaseBdev2", 00:14:58.695 "aliases": [ 00:14:58.695 "47c41e2f-3eaa-41af-8216-a99fdbcfd0dd" 00:14:58.695 ], 00:14:58.695 "product_name": "Malloc disk", 00:14:58.695 "block_size": 512, 00:14:58.695 "num_blocks": 65536, 00:14:58.695 "uuid": "47c41e2f-3eaa-41af-8216-a99fdbcfd0dd", 00:14:58.695 "assigned_rate_limits": { 00:14:58.695 "rw_ios_per_sec": 0, 00:14:58.695 "rw_mbytes_per_sec": 0, 00:14:58.695 "r_mbytes_per_sec": 0, 00:14:58.695 "w_mbytes_per_sec": 0 00:14:58.695 }, 00:14:58.695 "claimed": true, 00:14:58.695 "claim_type": "exclusive_write", 00:14:58.695 "zoned": false, 00:14:58.695 "supported_io_types": { 00:14:58.695 "read": true, 00:14:58.695 "write": true, 00:14:58.695 "unmap": true, 00:14:58.695 "write_zeroes": true, 00:14:58.695 "flush": true, 00:14:58.695 "reset": true, 00:14:58.695 "compare": false, 00:14:58.695 "compare_and_write": false, 00:14:58.695 "abort": true, 00:14:58.695 "nvme_admin": false, 00:14:58.695 "nvme_io": false 00:14:58.695 }, 00:14:58.695 "memory_domains": [ 00:14:58.695 { 00:14:58.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.695 "dma_device_type": 2 00:14:58.695 } 00:14:58.695 ], 00:14:58.695 "driver_specific": {} 00:14:58.695 } 00:14:58.695 ] 00:14:58.695 10:40:25 -- common/autotest_common.sh@895 -- # return 0 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.695 10:40:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.953 10:40:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:58.953 "name": "Existed_Raid", 00:14:58.953 "uuid": "0599f8b5-2798-43df-8572-600c7ab41289", 00:14:58.953 "strip_size_kb": 0, 00:14:58.953 "state": "online", 00:14:58.953 "raid_level": "raid1", 00:14:58.953 "superblock": false, 00:14:58.953 "num_base_bdevs": 2, 00:14:58.953 
"num_base_bdevs_discovered": 2, 00:14:58.953 "num_base_bdevs_operational": 2, 00:14:58.953 "base_bdevs_list": [ 00:14:58.953 { 00:14:58.953 "name": "BaseBdev1", 00:14:58.953 "uuid": "d2c98a1b-a02f-42bd-84a9-bcb93a2db450", 00:14:58.953 "is_configured": true, 00:14:58.953 "data_offset": 0, 00:14:58.953 "data_size": 65536 00:14:58.953 }, 00:14:58.953 { 00:14:58.953 "name": "BaseBdev2", 00:14:58.953 "uuid": "47c41e2f-3eaa-41af-8216-a99fdbcfd0dd", 00:14:58.953 "is_configured": true, 00:14:58.953 "data_offset": 0, 00:14:58.953 "data_size": 65536 00:14:58.953 } 00:14:58.954 ] 00:14:58.954 }' 00:14:58.954 10:40:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:58.954 10:40:25 -- common/autotest_common.sh@10 -- # set +x 00:14:59.520 10:40:26 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:59.778 [2024-07-24 10:40:26.401900] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:59.778 10:40:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:59.779 10:40:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.779 10:40:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.779 10:40:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.779 10:40:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.779 10:40:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.779 10:40:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.345 10:40:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.345 "name": "Existed_Raid", 00:15:00.345 "uuid": "0599f8b5-2798-43df-8572-600c7ab41289", 00:15:00.345 "strip_size_kb": 0, 00:15:00.345 "state": "online", 00:15:00.345 "raid_level": "raid1", 00:15:00.345 "superblock": false, 00:15:00.345 "num_base_bdevs": 2, 00:15:00.345 "num_base_bdevs_discovered": 1, 00:15:00.345 "num_base_bdevs_operational": 1, 00:15:00.345 "base_bdevs_list": [ 00:15:00.345 { 00:15:00.345 "name": null, 00:15:00.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.345 "is_configured": false, 00:15:00.345 "data_offset": 0, 00:15:00.345 "data_size": 65536 00:15:00.345 }, 00:15:00.345 { 00:15:00.345 "name": "BaseBdev2", 00:15:00.345 "uuid": "47c41e2f-3eaa-41af-8216-a99fdbcfd0dd", 00:15:00.345 "is_configured": true, 00:15:00.345 "data_offset": 0, 00:15:00.345 "data_size": 65536 00:15:00.345 } 00:15:00.345 ] 00:15:00.345 }' 00:15:00.345 10:40:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.345 10:40:26 -- common/autotest_common.sh@10 -- # set +x 00:15:00.910 10:40:27 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:00.911 10:40:27 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:00.911 10:40:27 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.911 10:40:27 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:01.169 10:40:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:01.169 10:40:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:01.169 10:40:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:01.428 [2024-07-24 10:40:27.924410] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:01.428 [2024-07-24 10:40:27.924778] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.428 [2024-07-24 10:40:27.925025] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.428 [2024-07-24 10:40:27.939993] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.428 [2024-07-24 10:40:27.940289] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:01.428 10:40:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:01.428 10:40:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:01.428 10:40:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.428 10:40:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:01.686 10:40:28 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:01.686 10:40:28 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:01.686 10:40:28 -- bdev/bdev_raid.sh@287 -- # killprocess 124715 00:15:01.686 10:40:28 -- common/autotest_common.sh@926 -- # '[' -z 124715 ']' 00:15:01.686 10:40:28 -- common/autotest_common.sh@930 -- # kill -0 124715 00:15:01.686 10:40:28 -- common/autotest_common.sh@931 -- # uname 00:15:01.687 10:40:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:01.687 10:40:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124715 00:15:01.687 10:40:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:01.687 10:40:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:01.687 10:40:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124715' 00:15:01.687 killing process with pid 124715 00:15:01.687 10:40:28 -- common/autotest_common.sh@945 -- # kill 124715 00:15:01.687 10:40:28 -- common/autotest_common.sh@950 -- # wait 124715 00:15:01.687 [2024-07-24 10:40:28.206297] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.687 [2024-07-24 10:40:28.206430] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.945 ************************************ 00:15:01.945 END TEST raid_state_function_test 00:15:01.945 ************************************ 00:15:01.945 10:40:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:01.945 00:15:01.945 real 0m9.754s 00:15:01.945 user 0m17.604s 00:15:01.945 sys 0m1.296s 00:15:01.945 10:40:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.945 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:15:01.945 10:40:28 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:01.945 10:40:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:01.945 10:40:28 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:01.946 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:15:01.946 ************************************ 00:15:01.946 START TEST raid_state_function_test_sb 00:15:01.946 ************************************ 00:15:01.946 10:40:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=125037 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125037' 00:15:01.946 Process raid pid: 125037 00:15:01.946 10:40:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125037 /var/tmp/spdk-raid.sock 00:15:01.946 10:40:28 -- common/autotest_common.sh@819 -- # '[' -z 125037 ']' 00:15:01.946 10:40:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:01.946 10:40:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:01.946 10:40:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:01.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:01.946 10:40:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:01.946 10:40:28 -- common/autotest_common.sh@10 -- # set +x 00:15:02.204 [2024-07-24 10:40:28.631989] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:15:02.204 [2024-07-24 10:40:28.632506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.204 [2024-07-24 10:40:28.778726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.475 [2024-07-24 10:40:28.889068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.475 [2024-07-24 10:40:28.962082] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.054 10:40:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:03.055 10:40:29 -- common/autotest_common.sh@852 -- # return 0 00:15:03.055 10:40:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:03.313 [2024-07-24 10:40:29.842719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.313 [2024-07-24 10:40:29.843129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.313 [2024-07-24 10:40:29.843257] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.313 [2024-07-24 10:40:29.843322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.313 10:40:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:03.313 10:40:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.313 10:40:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:03.313 10:40:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:03.313 10:40:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:03.314 10:40:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:03.314 10:40:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.314 10:40:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.314 10:40:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.314 10:40:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.314 10:40:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.314 10:40:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.573 10:40:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.573 "name": "Existed_Raid", 00:15:03.573 "uuid": "c5b9f692-0487-49b2-b98d-dc36044c8175", 00:15:03.573 "strip_size_kb": 0, 00:15:03.573 "state": "configuring", 00:15:03.573 "raid_level": "raid1", 00:15:03.573 "superblock": true, 00:15:03.573 "num_base_bdevs": 2, 00:15:03.573 "num_base_bdevs_discovered": 0, 00:15:03.573 "num_base_bdevs_operational": 2, 00:15:03.573 "base_bdevs_list": [ 00:15:03.573 { 00:15:03.573 "name": "BaseBdev1", 00:15:03.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.573 "is_configured": false, 00:15:03.573 "data_offset": 0, 00:15:03.573 "data_size": 0 00:15:03.573 }, 00:15:03.573 { 00:15:03.573 "name": "BaseBdev2", 00:15:03.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.573 "is_configured": false, 00:15:03.573 "data_offset": 0, 00:15:03.573 "data_size": 0 00:15:03.573 } 00:15:03.573 ] 00:15:03.573 }' 00:15:03.573 10:40:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.573 10:40:30 -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.141 10:40:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.400 [2024-07-24 10:40:30.942897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.400 [2024-07-24 10:40:30.943380] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:04.400 10:40:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:04.659 [2024-07-24 10:40:31.219071] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.659 [2024-07-24 10:40:31.219523] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.659 [2024-07-24 10:40:31.219670] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.659 [2024-07-24 10:40:31.219741] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.659 10:40:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.918 [2024-07-24 10:40:31.497622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.918 BaseBdev1 00:15:04.918 10:40:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:04.918 10:40:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:04.918 10:40:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:04.918 10:40:31 -- common/autotest_common.sh@889 -- # local i 00:15:04.918 10:40:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:04.918 10:40:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:04.918 10:40:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.176 10:40:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.434 [ 00:15:05.434 { 00:15:05.434 "name": "BaseBdev1", 00:15:05.434 "aliases": [ 00:15:05.434 "02536b58-49e9-47c8-a9ae-306f4679cb19" 00:15:05.434 ], 00:15:05.434 "product_name": "Malloc disk", 00:15:05.435 "block_size": 512, 00:15:05.435 "num_blocks": 65536, 00:15:05.435 "uuid": "02536b58-49e9-47c8-a9ae-306f4679cb19", 00:15:05.435 "assigned_rate_limits": { 00:15:05.435 "rw_ios_per_sec": 0, 00:15:05.435 "rw_mbytes_per_sec": 0, 00:15:05.435 "r_mbytes_per_sec": 0, 00:15:05.435 "w_mbytes_per_sec": 0 00:15:05.435 }, 00:15:05.435 "claimed": true, 00:15:05.435 "claim_type": "exclusive_write", 00:15:05.435 "zoned": false, 00:15:05.435 "supported_io_types": { 00:15:05.435 "read": true, 00:15:05.435 "write": true, 00:15:05.435 "unmap": true, 00:15:05.435 "write_zeroes": true, 00:15:05.435 "flush": true, 00:15:05.435 "reset": true, 00:15:05.435 "compare": false, 00:15:05.435 "compare_and_write": false, 00:15:05.435 "abort": true, 00:15:05.435 "nvme_admin": false, 00:15:05.435 "nvme_io": false 00:15:05.435 }, 00:15:05.435 "memory_domains": [ 00:15:05.435 { 00:15:05.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.435 "dma_device_type": 2 00:15:05.435 } 00:15:05.435 ], 00:15:05.435 "driver_specific": {} 00:15:05.435 } 00:15:05.435 ] 00:15:05.435 10:40:31 -- 
common/autotest_common.sh@895 -- # return 0 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.435 10:40:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.694 10:40:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:05.694 "name": "Existed_Raid", 00:15:05.694 "uuid": "0c65964d-f33a-4b44-8055-d7b4e4fe4616", 00:15:05.694 "strip_size_kb": 0, 00:15:05.694 "state": "configuring", 00:15:05.694 "raid_level": "raid1", 00:15:05.694 "superblock": true, 00:15:05.694 "num_base_bdevs": 2, 00:15:05.694 "num_base_bdevs_discovered": 1, 00:15:05.694 "num_base_bdevs_operational": 2, 00:15:05.694 "base_bdevs_list": [ 00:15:05.694 { 00:15:05.694 "name": "BaseBdev1", 00:15:05.694 "uuid": "02536b58-49e9-47c8-a9ae-306f4679cb19", 00:15:05.694 "is_configured": true, 00:15:05.694 "data_offset": 2048, 00:15:05.694 "data_size": 63488 00:15:05.694 }, 00:15:05.694 { 00:15:05.694 "name": "BaseBdev2", 00:15:05.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.694 "is_configured": false, 00:15:05.694 "data_offset": 0, 00:15:05.694 "data_size": 0 00:15:05.694 } 00:15:05.694 ] 00:15:05.694 }' 00:15:05.694 10:40:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:05.694 10:40:32 -- common/autotest_common.sh@10 -- # set +x 00:15:06.261 10:40:32 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:06.520 [2024-07-24 10:40:33.126079] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.520 [2024-07-24 10:40:33.126466] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:06.520 10:40:33 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:06.520 10:40:33 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:06.781 10:40:33 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:07.039 BaseBdev1 00:15:07.039 10:40:33 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:07.039 10:40:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:07.039 10:40:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:07.039 10:40:33 -- common/autotest_common.sh@889 -- # local i 00:15:07.039 10:40:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:07.039 10:40:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:07.039 10:40:33 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:07.297 10:40:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:07.556 [ 00:15:07.556 { 00:15:07.556 "name": "BaseBdev1", 00:15:07.556 "aliases": [ 00:15:07.556 "84013b4b-0f89-4340-87d2-b4d3b665c7c6" 00:15:07.556 ], 00:15:07.556 "product_name": "Malloc disk", 00:15:07.556 "block_size": 512, 00:15:07.556 "num_blocks": 65536, 00:15:07.556 "uuid": "84013b4b-0f89-4340-87d2-b4d3b665c7c6", 00:15:07.556 "assigned_rate_limits": { 00:15:07.556 "rw_ios_per_sec": 0, 00:15:07.556 "rw_mbytes_per_sec": 0, 00:15:07.556 "r_mbytes_per_sec": 0, 00:15:07.556 "w_mbytes_per_sec": 0 00:15:07.556 }, 00:15:07.556 "claimed": false, 00:15:07.556 "zoned": false, 00:15:07.556 "supported_io_types": { 00:15:07.556 "read": true, 00:15:07.556 "write": true, 00:15:07.556 "unmap": true, 00:15:07.556 "write_zeroes": true, 00:15:07.556 "flush": true, 00:15:07.556 "reset": true, 00:15:07.556 "compare": false, 00:15:07.556 "compare_and_write": false, 00:15:07.556 "abort": true, 00:15:07.556 "nvme_admin": false, 00:15:07.556 "nvme_io": false 00:15:07.556 }, 00:15:07.556 "memory_domains": [ 00:15:07.556 { 00:15:07.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.556 "dma_device_type": 2 00:15:07.556 } 00:15:07.556 ], 00:15:07.556 "driver_specific": {} 00:15:07.556 } 00:15:07.556 ] 00:15:07.556 10:40:34 -- common/autotest_common.sh@895 -- # return 0 00:15:07.556 10:40:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:07.815 [2024-07-24 10:40:34.455894] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.815 [2024-07-24 10:40:34.458623] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.815 [2024-07-24 10:40:34.458861] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.815 10:40:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.074 10:40:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.074 "name": "Existed_Raid", 00:15:08.074 "uuid": "15b51cc4-f751-4825-92d8-3d0c121b0d94", 00:15:08.074 "strip_size_kb": 0, 00:15:08.074 "state": "configuring", 
00:15:08.074 "raid_level": "raid1", 00:15:08.074 "superblock": true, 00:15:08.074 "num_base_bdevs": 2, 00:15:08.074 "num_base_bdevs_discovered": 1, 00:15:08.074 "num_base_bdevs_operational": 2, 00:15:08.074 "base_bdevs_list": [ 00:15:08.074 { 00:15:08.074 "name": "BaseBdev1", 00:15:08.074 "uuid": "84013b4b-0f89-4340-87d2-b4d3b665c7c6", 00:15:08.074 "is_configured": true, 00:15:08.074 "data_offset": 2048, 00:15:08.074 "data_size": 63488 00:15:08.074 }, 00:15:08.074 { 00:15:08.074 "name": "BaseBdev2", 00:15:08.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.074 "is_configured": false, 00:15:08.074 "data_offset": 0, 00:15:08.074 "data_size": 0 00:15:08.074 } 00:15:08.074 ] 00:15:08.074 }' 00:15:08.074 10:40:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.074 10:40:34 -- common/autotest_common.sh@10 -- # set +x 00:15:08.642 10:40:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:09.209 [2024-07-24 10:40:35.592490] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.209 [2024-07-24 10:40:35.595356] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:09.209 [2024-07-24 10:40:35.595722] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:09.209 BaseBdev2 00:15:09.209 [2024-07-24 10:40:35.596324] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:09.209 [2024-07-24 10:40:35.597571] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:09.209 [2024-07-24 10:40:35.597843] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:09.209 [2024-07-24 10:40:35.598437] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.209 10:40:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:09.210 10:40:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:09.210 10:40:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:09.210 10:40:35 -- common/autotest_common.sh@889 -- # local i 00:15:09.210 10:40:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:09.210 10:40:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:09.210 10:40:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.210 10:40:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.467 [ 00:15:09.467 { 00:15:09.467 "name": "BaseBdev2", 00:15:09.467 "aliases": [ 00:15:09.467 "e558055f-a3a7-4bdc-acc6-93f73d609cf1" 00:15:09.467 ], 00:15:09.467 "product_name": "Malloc disk", 00:15:09.467 "block_size": 512, 00:15:09.467 "num_blocks": 65536, 00:15:09.467 "uuid": "e558055f-a3a7-4bdc-acc6-93f73d609cf1", 00:15:09.467 "assigned_rate_limits": { 00:15:09.467 "rw_ios_per_sec": 0, 00:15:09.467 "rw_mbytes_per_sec": 0, 00:15:09.467 "r_mbytes_per_sec": 0, 00:15:09.467 "w_mbytes_per_sec": 0 00:15:09.467 }, 00:15:09.467 "claimed": true, 00:15:09.467 "claim_type": "exclusive_write", 00:15:09.467 "zoned": false, 00:15:09.468 "supported_io_types": { 00:15:09.468 "read": true, 00:15:09.468 "write": true, 00:15:09.468 "unmap": true, 00:15:09.468 "write_zeroes": true, 00:15:09.468 "flush": true, 00:15:09.468 "reset": true, 
00:15:09.468 "compare": false, 00:15:09.468 "compare_and_write": false, 00:15:09.468 "abort": true, 00:15:09.468 "nvme_admin": false, 00:15:09.468 "nvme_io": false 00:15:09.468 }, 00:15:09.468 "memory_domains": [ 00:15:09.468 { 00:15:09.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.468 "dma_device_type": 2 00:15:09.468 } 00:15:09.468 ], 00:15:09.468 "driver_specific": {} 00:15:09.468 } 00:15:09.468 ] 00:15:09.468 10:40:36 -- common/autotest_common.sh@895 -- # return 0 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.468 10:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.727 10:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.727 "name": "Existed_Raid", 00:15:09.727 "uuid": "15b51cc4-f751-4825-92d8-3d0c121b0d94", 00:15:09.727 "strip_size_kb": 0, 00:15:09.727 "state": "online", 00:15:09.727 "raid_level": "raid1", 00:15:09.727 "superblock": true, 00:15:09.727 "num_base_bdevs": 2, 00:15:09.727 "num_base_bdevs_discovered": 2, 00:15:09.727 "num_base_bdevs_operational": 2, 00:15:09.727 "base_bdevs_list": [ 00:15:09.727 { 00:15:09.727 "name": "BaseBdev1", 00:15:09.727 "uuid": "84013b4b-0f89-4340-87d2-b4d3b665c7c6", 00:15:09.727 "is_configured": true, 00:15:09.727 "data_offset": 2048, 00:15:09.727 "data_size": 63488 00:15:09.727 }, 00:15:09.727 { 00:15:09.727 "name": "BaseBdev2", 00:15:09.727 "uuid": "e558055f-a3a7-4bdc-acc6-93f73d609cf1", 00:15:09.727 "is_configured": true, 00:15:09.727 "data_offset": 2048, 00:15:09.727 "data_size": 63488 00:15:09.727 } 00:15:09.727 ] 00:15:09.727 }' 00:15:09.727 10:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.727 10:40:36 -- common/autotest_common.sh@10 -- # set +x 00:15:10.295 10:40:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:10.554 [2024-07-24 10:40:37.158885] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.554 
10:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.554 10:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.813 10:40:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.813 "name": "Existed_Raid", 00:15:10.813 "uuid": "15b51cc4-f751-4825-92d8-3d0c121b0d94", 00:15:10.813 "strip_size_kb": 0, 00:15:10.813 "state": "online", 00:15:10.813 "raid_level": "raid1", 00:15:10.813 "superblock": true, 00:15:10.813 "num_base_bdevs": 2, 00:15:10.813 "num_base_bdevs_discovered": 1, 00:15:10.813 "num_base_bdevs_operational": 1, 00:15:10.813 "base_bdevs_list": [ 00:15:10.813 { 00:15:10.813 "name": null, 00:15:10.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.813 "is_configured": false, 00:15:10.813 "data_offset": 2048, 00:15:10.813 "data_size": 63488 00:15:10.813 }, 00:15:10.813 { 00:15:10.813 "name": "BaseBdev2", 00:15:10.813 "uuid": "e558055f-a3a7-4bdc-acc6-93f73d609cf1", 00:15:10.813 "is_configured": true, 00:15:10.813 "data_offset": 2048, 00:15:10.813 "data_size": 63488 00:15:10.813 } 00:15:10.813 ] 00:15:10.813 }' 00:15:10.813 10:40:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.813 10:40:37 -- common/autotest_common.sh@10 -- # set +x 00:15:11.748 10:40:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:11.748 10:40:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:11.748 10:40:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.748 10:40:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:11.748 10:40:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:11.748 10:40:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:11.748 10:40:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:12.006 [2024-07-24 10:40:38.614114] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:12.006 [2024-07-24 10:40:38.614617] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.006 [2024-07-24 10:40:38.614890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.006 [2024-07-24 10:40:38.628354] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.006 [2024-07-24 10:40:38.628599] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:12.006 10:40:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:12.006 10:40:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:12.006 10:40:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:12.006 10:40:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:12.264 10:40:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:12.264 10:40:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:12.264 10:40:38 -- bdev/bdev_raid.sh@287 -- # killprocess 125037 00:15:12.264 10:40:38 -- common/autotest_common.sh@926 -- # '[' -z 125037 ']' 00:15:12.264 10:40:38 -- common/autotest_common.sh@930 -- # kill -0 125037 00:15:12.264 10:40:38 -- common/autotest_common.sh@931 -- # uname 00:15:12.264 10:40:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:12.264 10:40:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125037 00:15:12.264 10:40:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:12.264 10:40:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:12.264 10:40:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125037' 00:15:12.264 killing process with pid 125037 00:15:12.264 10:40:38 -- common/autotest_common.sh@945 -- # kill 125037 00:15:12.264 10:40:38 -- common/autotest_common.sh@950 -- # wait 125037 00:15:12.264 [2024-07-24 10:40:38.896709] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.264 [2024-07-24 10:40:38.896849] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:12.831 00:15:12.831 real 0m10.656s 00:15:12.831 user 0m19.243s 00:15:12.831 sys 0m1.453s 00:15:12.831 10:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.831 10:40:39 -- common/autotest_common.sh@10 -- # set +x 00:15:12.831 ************************************ 00:15:12.831 END TEST raid_state_function_test_sb 00:15:12.831 ************************************ 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:12.831 10:40:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:12.831 10:40:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:12.831 10:40:39 -- common/autotest_common.sh@10 -- # set +x 00:15:12.831 ************************************ 00:15:12.831 START TEST raid_superblock_test 00:15:12.831 ************************************ 00:15:12.831 10:40:39 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@357 -- # raid_pid=125361 00:15:12.831 10:40:39 
-- bdev/bdev_raid.sh@358 -- # waitforlisten 125361 /var/tmp/spdk-raid.sock 00:15:12.831 10:40:39 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:12.831 10:40:39 -- common/autotest_common.sh@819 -- # '[' -z 125361 ']' 00:15:12.831 10:40:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:12.831 10:40:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.831 10:40:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:12.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:12.831 10:40:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.831 10:40:39 -- common/autotest_common.sh@10 -- # set +x 00:15:12.831 [2024-07-24 10:40:39.344514] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:12.831 [2024-07-24 10:40:39.344998] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125361 ] 00:15:12.831 [2024-07-24 10:40:39.488243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.089 [2024-07-24 10:40:39.592879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.089 [2024-07-24 10:40:39.672104] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.655 10:40:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:13.655 10:40:40 -- common/autotest_common.sh@852 -- # return 0 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:13.655 10:40:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:13.914 malloc1 00:15:13.914 10:40:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.214 [2024-07-24 10:40:40.794625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.214 [2024-07-24 10:40:40.795147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.214 [2024-07-24 10:40:40.795382] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:14.214 [2024-07-24 10:40:40.795621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.214 [2024-07-24 10:40:40.798913] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.214 [2024-07-24 10:40:40.799142] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.214 pt1 00:15:14.214 
10:40:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:14.214 10:40:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:14.214 10:40:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:14.214 10:40:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:14.214 10:40:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:14.214 10:40:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.214 10:40:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.214 10:40:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.215 10:40:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:14.473 malloc2 00:15:14.473 10:40:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:14.732 [2024-07-24 10:40:41.262641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:14.732 [2024-07-24 10:40:41.263061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.732 [2024-07-24 10:40:41.263161] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:14.732 [2024-07-24 10:40:41.263466] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.732 [2024-07-24 10:40:41.266351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.732 [2024-07-24 10:40:41.266551] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:14.732 pt2 00:15:14.732 10:40:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:14.732 10:40:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:14.732 10:40:41 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:14.991 [2024-07-24 10:40:41.487093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.991 [2024-07-24 10:40:41.489819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.991 [2024-07-24 10:40:41.490276] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006c80 00:15:14.991 [2024-07-24 10:40:41.490449] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.991 [2024-07-24 10:40:41.490686] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:14.991 [2024-07-24 10:40:41.491368] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006c80 00:15:14.991 [2024-07-24 10:40:41.491531] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000006c80 00:15:14.991 [2024-07-24 10:40:41.491940] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.991 10:40:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.250 10:40:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.250 "name": "raid_bdev1", 00:15:15.250 "uuid": "99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a", 00:15:15.250 "strip_size_kb": 0, 00:15:15.250 "state": "online", 00:15:15.250 "raid_level": "raid1", 00:15:15.250 "superblock": true, 00:15:15.250 "num_base_bdevs": 2, 00:15:15.250 "num_base_bdevs_discovered": 2, 00:15:15.250 "num_base_bdevs_operational": 2, 00:15:15.250 "base_bdevs_list": [ 00:15:15.250 { 00:15:15.250 "name": "pt1", 00:15:15.250 "uuid": "9dafa029-2b56-5d65-8b07-026f15853e1f", 00:15:15.250 "is_configured": true, 00:15:15.250 "data_offset": 2048, 00:15:15.250 "data_size": 63488 00:15:15.250 }, 00:15:15.250 { 00:15:15.250 "name": "pt2", 00:15:15.250 "uuid": "e7dd35df-1d25-5f5c-bd7c-5571b4b161af", 00:15:15.250 "is_configured": true, 00:15:15.250 "data_offset": 2048, 00:15:15.250 "data_size": 63488 00:15:15.250 } 00:15:15.250 ] 00:15:15.250 }' 00:15:15.250 10:40:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.250 10:40:41 -- common/autotest_common.sh@10 -- # set +x 00:15:15.818 10:40:42 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:15.818 10:40:42 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:16.077 [2024-07-24 10:40:42.692610] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.077 10:40:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a 00:15:16.077 10:40:42 -- bdev/bdev_raid.sh@380 -- # '[' -z 99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a ']' 00:15:16.077 10:40:42 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:16.335 [2024-07-24 10:40:42.956280] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.335 [2024-07-24 10:40:42.956579] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.335 [2024-07-24 10:40:42.956819] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.335 [2024-07-24 10:40:42.957050] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.335 [2024-07-24 10:40:42.957167] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name raid_bdev1, state offline 00:15:16.335 10:40:42 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.335 10:40:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:16.593 10:40:43 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:16.593 10:40:43 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:16.593 10:40:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.593 10:40:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:16.859 10:40:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:16.859 10:40:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:17.130 10:40:43 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:17.130 10:40:43 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:17.389 10:40:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:17.389 10:40:43 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:17.389 10:40:43 -- common/autotest_common.sh@640 -- # local es=0 00:15:17.389 10:40:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:17.389 10:40:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.389 10:40:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:17.389 10:40:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.389 10:40:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:17.389 10:40:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.389 10:40:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:17.389 10:40:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.389 10:40:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:17.389 10:40:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:17.648 [2024-07-24 10:40:44.116583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:17.648 [2024-07-24 10:40:44.119014] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:17.648 [2024-07-24 10:40:44.119249] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:17.648 [2024-07-24 10:40:44.119484] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:17.648 [2024-07-24 10:40:44.119684] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.648 [2024-07-24 10:40:44.119832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state configuring 00:15:17.648 request: 00:15:17.648 { 00:15:17.648 "name": "raid_bdev1", 00:15:17.648 "raid_level": "raid1", 00:15:17.648 "base_bdevs": [ 00:15:17.648 "malloc1", 00:15:17.648 "malloc2" 00:15:17.648 ], 00:15:17.648 "superblock": false, 00:15:17.648 "method": "bdev_raid_create", 00:15:17.648 "req_id": 1 00:15:17.648 } 00:15:17.648 Got JSON-RPC error response 00:15:17.648 response: 00:15:17.648 { 00:15:17.648 "code": -17, 00:15:17.648 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:17.648 } 00:15:17.648 10:40:44 -- common/autotest_common.sh@643 -- # es=1 00:15:17.648 10:40:44 -- common/autotest_common.sh@651 -- # 
(( es > 128 )) 00:15:17.648 10:40:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:17.648 10:40:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:17.648 10:40:44 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.648 10:40:44 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:17.907 10:40:44 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:17.907 10:40:44 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:17.907 10:40:44 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.907 [2024-07-24 10:40:44.584665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.907 [2024-07-24 10:40:44.585015] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.907 [2024-07-24 10:40:44.585197] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:17.907 [2024-07-24 10:40:44.585425] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.907 [2024-07-24 10:40:44.588211] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.907 [2024-07-24 10:40:44.588397] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.907 [2024-07-24 10:40:44.588700] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:17.907 [2024-07-24 10:40:44.588893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.907 pt1 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.166 10:40:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.424 10:40:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.424 "name": "raid_bdev1", 00:15:18.425 "uuid": "99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a", 00:15:18.425 "strip_size_kb": 0, 00:15:18.425 "state": "configuring", 00:15:18.425 "raid_level": "raid1", 00:15:18.425 "superblock": true, 00:15:18.425 "num_base_bdevs": 2, 00:15:18.425 "num_base_bdevs_discovered": 1, 00:15:18.425 "num_base_bdevs_operational": 2, 00:15:18.425 "base_bdevs_list": [ 00:15:18.425 { 00:15:18.425 "name": "pt1", 00:15:18.425 "uuid": "9dafa029-2b56-5d65-8b07-026f15853e1f", 00:15:18.425 "is_configured": true, 00:15:18.425 "data_offset": 2048, 00:15:18.425 "data_size": 63488 00:15:18.425 }, 00:15:18.425 { 00:15:18.425 "name": null, 00:15:18.425 "uuid": "e7dd35df-1d25-5f5c-bd7c-5571b4b161af", 00:15:18.425 
"is_configured": false, 00:15:18.425 "data_offset": 2048, 00:15:18.425 "data_size": 63488 00:15:18.425 } 00:15:18.425 ] 00:15:18.425 }' 00:15:18.425 10:40:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.425 10:40:44 -- common/autotest_common.sh@10 -- # set +x 00:15:18.992 10:40:45 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:18.992 10:40:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:18.992 10:40:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:18.992 10:40:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.249 [2024-07-24 10:40:45.793215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.249 [2024-07-24 10:40:45.793570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.249 [2024-07-24 10:40:45.793754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:19.249 [2024-07-24 10:40:45.793907] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.249 [2024-07-24 10:40:45.794571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.249 [2024-07-24 10:40:45.794748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.250 [2024-07-24 10:40:45.794979] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:19.250 [2024-07-24 10:40:45.795143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.250 [2024-07-24 10:40:45.795402] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:19.250 [2024-07-24 10:40:45.795590] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:19.250 [2024-07-24 10:40:45.795803] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:19.250 [2024-07-24 10:40:45.796297] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:19.250 [2024-07-24 10:40:45.796433] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:19.250 [2024-07-24 10:40:45.796687] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.250 pt2 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.250 10:40:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.250 10:40:45 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.510 10:40:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.510 "name": "raid_bdev1", 00:15:19.510 "uuid": "99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a", 00:15:19.510 "strip_size_kb": 0, 00:15:19.510 "state": "online", 00:15:19.510 "raid_level": "raid1", 00:15:19.510 "superblock": true, 00:15:19.510 "num_base_bdevs": 2, 00:15:19.510 "num_base_bdevs_discovered": 2, 00:15:19.510 "num_base_bdevs_operational": 2, 00:15:19.510 "base_bdevs_list": [ 00:15:19.510 { 00:15:19.510 "name": "pt1", 00:15:19.510 "uuid": "9dafa029-2b56-5d65-8b07-026f15853e1f", 00:15:19.510 "is_configured": true, 00:15:19.510 "data_offset": 2048, 00:15:19.510 "data_size": 63488 00:15:19.511 }, 00:15:19.511 { 00:15:19.511 "name": "pt2", 00:15:19.511 "uuid": "e7dd35df-1d25-5f5c-bd7c-5571b4b161af", 00:15:19.511 "is_configured": true, 00:15:19.511 "data_offset": 2048, 00:15:19.511 "data_size": 63488 00:15:19.511 } 00:15:19.511 ] 00:15:19.511 }' 00:15:19.511 10:40:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.511 10:40:46 -- common/autotest_common.sh@10 -- # set +x 00:15:20.078 10:40:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:20.078 10:40:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:20.337 [2024-07-24 10:40:46.973832] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.337 10:40:46 -- bdev/bdev_raid.sh@430 -- # '[' 99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a '!=' 99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a ']' 00:15:20.337 10:40:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:20.337 10:40:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:20.337 10:40:46 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:20.338 10:40:46 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:20.597 [2024-07-24 10:40:47.245706] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.597 10:40:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.855 10:40:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.855 "name": "raid_bdev1", 00:15:20.855 "uuid": "99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a", 00:15:20.855 "strip_size_kb": 0, 00:15:20.855 "state": "online", 00:15:20.855 "raid_level": "raid1", 00:15:20.855 "superblock": true, 00:15:20.855 "num_base_bdevs": 2, 00:15:20.855 "num_base_bdevs_discovered": 1, 00:15:20.855 "num_base_bdevs_operational": 1, 00:15:20.855 
"base_bdevs_list": [ 00:15:20.855 { 00:15:20.855 "name": null, 00:15:20.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.855 "is_configured": false, 00:15:20.855 "data_offset": 2048, 00:15:20.855 "data_size": 63488 00:15:20.855 }, 00:15:20.855 { 00:15:20.855 "name": "pt2", 00:15:20.855 "uuid": "e7dd35df-1d25-5f5c-bd7c-5571b4b161af", 00:15:20.855 "is_configured": true, 00:15:20.855 "data_offset": 2048, 00:15:20.855 "data_size": 63488 00:15:20.855 } 00:15:20.855 ] 00:15:20.855 }' 00:15:20.855 10:40:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.855 10:40:47 -- common/autotest_common.sh@10 -- # set +x 00:15:21.811 10:40:48 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:22.081 [2024-07-24 10:40:48.522015] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.081 [2024-07-24 10:40:48.522349] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.081 [2024-07-24 10:40:48.522544] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.081 [2024-07-24 10:40:48.522768] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.081 [2024-07-24 10:40:48.522891] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:22.081 10:40:48 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.081 10:40:48 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:22.341 10:40:48 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:22.341 10:40:48 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:22.341 10:40:48 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:22.341 10:40:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:22.341 10:40:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:22.599 10:40:49 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:22.599 10:40:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:22.599 10:40:49 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:22.599 10:40:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:22.599 10:40:49 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:22.599 10:40:49 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.857 [2024-07-24 10:40:49.314252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.857 [2024-07-24 10:40:49.314661] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.857 [2024-07-24 10:40:49.314836] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:22.857 [2024-07-24 10:40:49.314979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.857 [2024-07-24 10:40:49.317770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.857 [2024-07-24 10:40:49.318058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.857 [2024-07-24 10:40:49.318279] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:22.857 [2024-07-24 10:40:49.318434] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.857 [2024-07-24 10:40:49.318716] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:15:22.857 [2024-07-24 10:40:49.319734] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:22.857 pt2 00:15:22.857 [2024-07-24 10:40:49.320282] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:22.857 [2024-07-24 10:40:49.321321] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:15:22.857 [2024-07-24 10:40:49.321626] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:15:22.857 [2024-07-24 10:40:49.322133] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.857 10:40:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.116 10:40:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.116 "name": "raid_bdev1", 00:15:23.116 "uuid": "99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a", 00:15:23.116 "strip_size_kb": 0, 00:15:23.116 "state": "online", 00:15:23.116 "raid_level": "raid1", 00:15:23.116 "superblock": true, 00:15:23.116 "num_base_bdevs": 2, 00:15:23.116 "num_base_bdevs_discovered": 1, 00:15:23.116 "num_base_bdevs_operational": 1, 00:15:23.116 "base_bdevs_list": [ 00:15:23.116 { 00:15:23.116 "name": null, 00:15:23.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.116 "is_configured": false, 00:15:23.116 "data_offset": 2048, 00:15:23.116 "data_size": 63488 00:15:23.116 }, 00:15:23.116 { 00:15:23.116 "name": "pt2", 00:15:23.116 "uuid": "e7dd35df-1d25-5f5c-bd7c-5571b4b161af", 00:15:23.116 "is_configured": true, 00:15:23.116 "data_offset": 2048, 00:15:23.116 "data_size": 63488 00:15:23.116 } 00:15:23.116 ] 00:15:23.116 }' 00:15:23.116 10:40:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.116 10:40:49 -- common/autotest_common.sh@10 -- # set +x 00:15:23.683 10:40:50 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:23.683 10:40:50 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:23.683 10:40:50 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:23.942 [2024-07-24 10:40:50.451187] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.942 10:40:50 -- bdev/bdev_raid.sh@506 -- # '[' 99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a '!=' 99b0c89c-7704-4ad4-a2d9-5cc4d11ff55a ']' 00:15:23.942 10:40:50 -- 
bdev/bdev_raid.sh@511 -- # killprocess 125361 00:15:23.942 10:40:50 -- common/autotest_common.sh@926 -- # '[' -z 125361 ']' 00:15:23.942 10:40:50 -- common/autotest_common.sh@930 -- # kill -0 125361 00:15:23.942 10:40:50 -- common/autotest_common.sh@931 -- # uname 00:15:23.942 10:40:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:23.942 10:40:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125361 00:15:23.942 killing process with pid 125361 00:15:23.942 10:40:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:23.942 10:40:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:23.942 10:40:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125361' 00:15:23.942 10:40:50 -- common/autotest_common.sh@945 -- # kill 125361 00:15:23.942 10:40:50 -- common/autotest_common.sh@950 -- # wait 125361 00:15:23.942 [2024-07-24 10:40:50.501919] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.942 [2024-07-24 10:40:50.502056] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.942 [2024-07-24 10:40:50.502185] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.942 [2024-07-24 10:40:50.502307] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:15:23.942 [2024-07-24 10:40:50.524428] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.201 ************************************ 00:15:24.201 END TEST raid_superblock_test 00:15:24.201 ************************************ 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:24.201 00:15:24.201 real 0m11.495s 00:15:24.201 user 0m21.273s 00:15:24.201 sys 0m1.441s 00:15:24.201 10:40:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.201 10:40:50 -- common/autotest_common.sh@10 -- # set +x 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:24.201 10:40:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:24.201 10:40:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:24.201 10:40:50 -- common/autotest_common.sh@10 -- # set +x 00:15:24.201 ************************************ 00:15:24.201 START TEST raid_state_function_test 00:15:24.201 ************************************ 00:15:24.201 10:40:50 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@226 -- # raid_pid=125719 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125719' 00:15:24.201 Process raid pid: 125719 00:15:24.201 10:40:50 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125719 /var/tmp/spdk-raid.sock 00:15:24.201 10:40:50 -- common/autotest_common.sh@819 -- # '[' -z 125719 ']' 00:15:24.201 10:40:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:24.201 10:40:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:24.201 10:40:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:24.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:24.201 10:40:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:24.201 10:40:50 -- common/autotest_common.sh@10 -- # set +x 00:15:24.460 [2024-07-24 10:40:50.905615] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
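The raid_superblock_test trace above walks the bdev_raid JSON-RPC surface end to end: re-creating a raid1 bdev directly on malloc bdevs that still hold an old raid superblock is rejected with -17 "File exists", re-adding the passthru legs re-assembles raid_bdev1 from that superblock ("configuring" with one leg, "online" with two), and dropping a leg leaves it online but degraded. A minimal shell sketch of that sequence, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and malloc1/malloc2 were prepared earlier as in the test (the RPC variable and the condensed ordering are illustrative, not the exact bdev_raid.sh helper calls):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # malloc1/malloc2 still carry the raid superblock written through the old passthru legs,
  # so creating a raid directly on them fails with -17 "File exists"
  $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 || echo "rejected: existing superblock"

  # re-create the passthru legs; the superblock is examined and raid_bdev1 is re-assembled
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'   # state "configuring", 1 of 2 legs
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'   # state "online", 2 of 2 legs

  # raid1 tolerates losing a leg: still "online", one base bdev discovered
  $RPC bdev_passthru_delete pt1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'   # state "online", 1 of 2 legs

  # deleting the raid bdev moves it through "offline" and frees the remaining leg
  $RPC bdev_raid_delete raid_bdev1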
00:15:24.460 [2024-07-24 10:40:50.906153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.460 [2024-07-24 10:40:51.060299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.718 [2024-07-24 10:40:51.163328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.718 [2024-07-24 10:40:51.222998] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.287 10:40:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:25.287 10:40:51 -- common/autotest_common.sh@852 -- # return 0 00:15:25.287 10:40:51 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:25.546 [2024-07-24 10:40:52.082416] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.546 [2024-07-24 10:40:52.082810] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.546 [2024-07-24 10:40:52.082945] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.546 [2024-07-24 10:40:52.083021] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.546 [2024-07-24 10:40:52.083163] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.546 [2024-07-24 10:40:52.083271] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.546 10:40:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.806 10:40:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.806 "name": "Existed_Raid", 00:15:25.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.806 "strip_size_kb": 64, 00:15:25.806 "state": "configuring", 00:15:25.806 "raid_level": "raid0", 00:15:25.806 "superblock": false, 00:15:25.806 "num_base_bdevs": 3, 00:15:25.806 "num_base_bdevs_discovered": 0, 00:15:25.806 "num_base_bdevs_operational": 3, 00:15:25.806 "base_bdevs_list": [ 00:15:25.806 { 00:15:25.806 "name": "BaseBdev1", 00:15:25.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.806 "is_configured": false, 00:15:25.806 "data_offset": 0, 00:15:25.806 "data_size": 0 00:15:25.806 }, 00:15:25.806 { 00:15:25.806 "name": "BaseBdev2", 00:15:25.806 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:25.806 "is_configured": false, 00:15:25.806 "data_offset": 0, 00:15:25.806 "data_size": 0 00:15:25.806 }, 00:15:25.806 { 00:15:25.806 "name": "BaseBdev3", 00:15:25.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.806 "is_configured": false, 00:15:25.806 "data_offset": 0, 00:15:25.806 "data_size": 0 00:15:25.806 } 00:15:25.806 ] 00:15:25.806 }' 00:15:25.806 10:40:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.806 10:40:52 -- common/autotest_common.sh@10 -- # set +x 00:15:26.747 10:40:53 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:26.747 [2024-07-24 10:40:53.334660] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.747 [2024-07-24 10:40:53.334920] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:26.748 10:40:53 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:27.007 [2024-07-24 10:40:53.646869] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.007 [2024-07-24 10:40:53.647250] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.007 [2024-07-24 10:40:53.647383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.007 [2024-07-24 10:40:53.647460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.007 [2024-07-24 10:40:53.647656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.007 [2024-07-24 10:40:53.647807] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.007 10:40:53 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.265 [2024-07-24 10:40:53.903405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.265 BaseBdev1 00:15:27.265 10:40:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:27.265 10:40:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:27.265 10:40:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:27.265 10:40:53 -- common/autotest_common.sh@889 -- # local i 00:15:27.265 10:40:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:27.265 10:40:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:27.265 10:40:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.524 10:40:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.783 [ 00:15:27.783 { 00:15:27.783 "name": "BaseBdev1", 00:15:27.783 "aliases": [ 00:15:27.783 "f3ed1f39-2645-4e84-92f0-9ffb66764c41" 00:15:27.783 ], 00:15:27.783 "product_name": "Malloc disk", 00:15:27.783 "block_size": 512, 00:15:27.783 "num_blocks": 65536, 00:15:27.783 "uuid": "f3ed1f39-2645-4e84-92f0-9ffb66764c41", 00:15:27.783 "assigned_rate_limits": { 00:15:27.783 "rw_ios_per_sec": 0, 00:15:27.783 "rw_mbytes_per_sec": 0, 00:15:27.783 "r_mbytes_per_sec": 0, 00:15:27.783 "w_mbytes_per_sec": 0 
00:15:27.783 }, 00:15:27.783 "claimed": true, 00:15:27.783 "claim_type": "exclusive_write", 00:15:27.783 "zoned": false, 00:15:27.783 "supported_io_types": { 00:15:27.783 "read": true, 00:15:27.783 "write": true, 00:15:27.783 "unmap": true, 00:15:27.783 "write_zeroes": true, 00:15:27.783 "flush": true, 00:15:27.783 "reset": true, 00:15:27.783 "compare": false, 00:15:27.783 "compare_and_write": false, 00:15:27.783 "abort": true, 00:15:27.783 "nvme_admin": false, 00:15:27.783 "nvme_io": false 00:15:27.783 }, 00:15:27.783 "memory_domains": [ 00:15:27.783 { 00:15:27.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.783 "dma_device_type": 2 00:15:27.783 } 00:15:27.783 ], 00:15:27.783 "driver_specific": {} 00:15:27.783 } 00:15:27.783 ] 00:15:27.783 10:40:54 -- common/autotest_common.sh@895 -- # return 0 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.783 10:40:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.041 10:40:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.041 "name": "Existed_Raid", 00:15:28.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.041 "strip_size_kb": 64, 00:15:28.041 "state": "configuring", 00:15:28.041 "raid_level": "raid0", 00:15:28.041 "superblock": false, 00:15:28.041 "num_base_bdevs": 3, 00:15:28.041 "num_base_bdevs_discovered": 1, 00:15:28.041 "num_base_bdevs_operational": 3, 00:15:28.041 "base_bdevs_list": [ 00:15:28.041 { 00:15:28.041 "name": "BaseBdev1", 00:15:28.041 "uuid": "f3ed1f39-2645-4e84-92f0-9ffb66764c41", 00:15:28.041 "is_configured": true, 00:15:28.041 "data_offset": 0, 00:15:28.041 "data_size": 65536 00:15:28.041 }, 00:15:28.041 { 00:15:28.041 "name": "BaseBdev2", 00:15:28.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.041 "is_configured": false, 00:15:28.041 "data_offset": 0, 00:15:28.041 "data_size": 0 00:15:28.041 }, 00:15:28.041 { 00:15:28.041 "name": "BaseBdev3", 00:15:28.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.041 "is_configured": false, 00:15:28.041 "data_offset": 0, 00:15:28.041 "data_size": 0 00:15:28.041 } 00:15:28.041 ] 00:15:28.041 }' 00:15:28.041 10:40:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.041 10:40:54 -- common/autotest_common.sh@10 -- # set +x 00:15:28.608 10:40:55 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.866 [2024-07-24 10:40:55.491910] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.866 [2024-07-24 10:40:55.492284] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:15:28.866 10:40:55 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:28.866 10:40:55 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:29.124 [2024-07-24 10:40:55.712092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.124 [2024-07-24 10:40:55.714706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.124 [2024-07-24 10:40:55.714910] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.124 [2024-07-24 10:40:55.715038] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.124 [2024-07-24 10:40:55.715113] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.124 10:40:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.382 10:40:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.382 "name": "Existed_Raid", 00:15:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.382 "strip_size_kb": 64, 00:15:29.382 "state": "configuring", 00:15:29.382 "raid_level": "raid0", 00:15:29.382 "superblock": false, 00:15:29.382 "num_base_bdevs": 3, 00:15:29.382 "num_base_bdevs_discovered": 1, 00:15:29.382 "num_base_bdevs_operational": 3, 00:15:29.382 "base_bdevs_list": [ 00:15:29.382 { 00:15:29.382 "name": "BaseBdev1", 00:15:29.382 "uuid": "f3ed1f39-2645-4e84-92f0-9ffb66764c41", 00:15:29.382 "is_configured": true, 00:15:29.382 "data_offset": 0, 00:15:29.382 "data_size": 65536 00:15:29.382 }, 00:15:29.382 { 00:15:29.382 "name": "BaseBdev2", 00:15:29.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.382 "is_configured": false, 00:15:29.382 "data_offset": 0, 00:15:29.382 "data_size": 0 00:15:29.382 }, 00:15:29.382 { 00:15:29.383 "name": "BaseBdev3", 00:15:29.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.383 "is_configured": false, 00:15:29.383 "data_offset": 0, 00:15:29.383 "data_size": 0 00:15:29.383 } 00:15:29.383 ] 00:15:29.383 }' 00:15:29.383 10:40:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.383 10:40:55 -- common/autotest_common.sh@10 -- # set +x 00:15:30.342 10:40:56 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.342 [2024-07-24 10:40:56.907682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.342 BaseBdev2 00:15:30.342 10:40:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:30.342 10:40:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:30.342 10:40:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:30.342 10:40:56 -- common/autotest_common.sh@889 -- # local i 00:15:30.342 10:40:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:30.342 10:40:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:30.342 10:40:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.600 10:40:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.859 [ 00:15:30.859 { 00:15:30.859 "name": "BaseBdev2", 00:15:30.859 "aliases": [ 00:15:30.859 "7947301d-6857-4628-a9b9-a77e93a0b1bc" 00:15:30.859 ], 00:15:30.859 "product_name": "Malloc disk", 00:15:30.859 "block_size": 512, 00:15:30.859 "num_blocks": 65536, 00:15:30.859 "uuid": "7947301d-6857-4628-a9b9-a77e93a0b1bc", 00:15:30.859 "assigned_rate_limits": { 00:15:30.859 "rw_ios_per_sec": 0, 00:15:30.859 "rw_mbytes_per_sec": 0, 00:15:30.859 "r_mbytes_per_sec": 0, 00:15:30.859 "w_mbytes_per_sec": 0 00:15:30.859 }, 00:15:30.859 "claimed": true, 00:15:30.859 "claim_type": "exclusive_write", 00:15:30.859 "zoned": false, 00:15:30.859 "supported_io_types": { 00:15:30.859 "read": true, 00:15:30.859 "write": true, 00:15:30.859 "unmap": true, 00:15:30.859 "write_zeroes": true, 00:15:30.859 "flush": true, 00:15:30.859 "reset": true, 00:15:30.859 "compare": false, 00:15:30.859 "compare_and_write": false, 00:15:30.859 "abort": true, 00:15:30.859 "nvme_admin": false, 00:15:30.859 "nvme_io": false 00:15:30.859 }, 00:15:30.859 "memory_domains": [ 00:15:30.859 { 00:15:30.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.859 "dma_device_type": 2 00:15:30.859 } 00:15:30.859 ], 00:15:30.859 "driver_specific": {} 00:15:30.859 } 00:15:30.859 ] 00:15:30.859 10:40:57 -- common/autotest_common.sh@895 -- # return 0 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.859 10:40:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:31.117 10:40:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.117 "name": "Existed_Raid", 00:15:31.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.118 "strip_size_kb": 64, 00:15:31.118 "state": "configuring", 00:15:31.118 "raid_level": "raid0", 00:15:31.118 "superblock": false, 00:15:31.118 "num_base_bdevs": 3, 00:15:31.118 "num_base_bdevs_discovered": 2, 00:15:31.118 "num_base_bdevs_operational": 3, 00:15:31.118 "base_bdevs_list": [ 00:15:31.118 { 00:15:31.118 "name": "BaseBdev1", 00:15:31.118 "uuid": "f3ed1f39-2645-4e84-92f0-9ffb66764c41", 00:15:31.118 "is_configured": true, 00:15:31.118 "data_offset": 0, 00:15:31.118 "data_size": 65536 00:15:31.118 }, 00:15:31.118 { 00:15:31.118 "name": "BaseBdev2", 00:15:31.118 "uuid": "7947301d-6857-4628-a9b9-a77e93a0b1bc", 00:15:31.118 "is_configured": true, 00:15:31.118 "data_offset": 0, 00:15:31.118 "data_size": 65536 00:15:31.118 }, 00:15:31.118 { 00:15:31.118 "name": "BaseBdev3", 00:15:31.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.118 "is_configured": false, 00:15:31.118 "data_offset": 0, 00:15:31.118 "data_size": 0 00:15:31.118 } 00:15:31.118 ] 00:15:31.118 }' 00:15:31.118 10:40:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.118 10:40:57 -- common/autotest_common.sh@10 -- # set +x 00:15:31.684 10:40:58 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.942 [2024-07-24 10:40:58.598542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.942 [2024-07-24 10:40:58.598896] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:15:31.942 [2024-07-24 10:40:58.598950] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:31.942 [2024-07-24 10:40:58.599217] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:15:31.942 [2024-07-24 10:40:58.599916] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:15:31.942 [2024-07-24 10:40:58.600052] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:15:31.942 [2024-07-24 10:40:58.600476] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.942 BaseBdev3 00:15:31.942 10:40:58 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:31.942 10:40:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:31.942 10:40:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:31.942 10:40:58 -- common/autotest_common.sh@889 -- # local i 00:15:31.942 10:40:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:31.942 10:40:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:31.942 10:40:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.508 10:40:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:32.508 [ 00:15:32.508 { 00:15:32.508 "name": "BaseBdev3", 00:15:32.508 "aliases": [ 00:15:32.508 "81e8db39-83af-4053-a76d-94d43e8f5442" 00:15:32.508 ], 00:15:32.508 "product_name": "Malloc disk", 00:15:32.508 "block_size": 512, 00:15:32.508 "num_blocks": 65536, 00:15:32.508 "uuid": "81e8db39-83af-4053-a76d-94d43e8f5442", 00:15:32.508 "assigned_rate_limits": { 00:15:32.508 
"rw_ios_per_sec": 0, 00:15:32.508 "rw_mbytes_per_sec": 0, 00:15:32.508 "r_mbytes_per_sec": 0, 00:15:32.508 "w_mbytes_per_sec": 0 00:15:32.508 }, 00:15:32.508 "claimed": true, 00:15:32.508 "claim_type": "exclusive_write", 00:15:32.508 "zoned": false, 00:15:32.508 "supported_io_types": { 00:15:32.508 "read": true, 00:15:32.508 "write": true, 00:15:32.508 "unmap": true, 00:15:32.508 "write_zeroes": true, 00:15:32.508 "flush": true, 00:15:32.508 "reset": true, 00:15:32.508 "compare": false, 00:15:32.509 "compare_and_write": false, 00:15:32.509 "abort": true, 00:15:32.509 "nvme_admin": false, 00:15:32.509 "nvme_io": false 00:15:32.509 }, 00:15:32.509 "memory_domains": [ 00:15:32.509 { 00:15:32.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.509 "dma_device_type": 2 00:15:32.509 } 00:15:32.509 ], 00:15:32.509 "driver_specific": {} 00:15:32.509 } 00:15:32.509 ] 00:15:32.509 10:40:59 -- common/autotest_common.sh@895 -- # return 0 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.509 10:40:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.766 10:40:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.767 "name": "Existed_Raid", 00:15:32.767 "uuid": "ad5c1a8d-e173-4e63-9fd8-c64a59a02df2", 00:15:32.767 "strip_size_kb": 64, 00:15:32.767 "state": "online", 00:15:32.767 "raid_level": "raid0", 00:15:32.767 "superblock": false, 00:15:32.767 "num_base_bdevs": 3, 00:15:32.767 "num_base_bdevs_discovered": 3, 00:15:32.767 "num_base_bdevs_operational": 3, 00:15:32.767 "base_bdevs_list": [ 00:15:32.767 { 00:15:32.767 "name": "BaseBdev1", 00:15:32.767 "uuid": "f3ed1f39-2645-4e84-92f0-9ffb66764c41", 00:15:32.767 "is_configured": true, 00:15:32.767 "data_offset": 0, 00:15:32.767 "data_size": 65536 00:15:32.767 }, 00:15:32.767 { 00:15:32.767 "name": "BaseBdev2", 00:15:32.767 "uuid": "7947301d-6857-4628-a9b9-a77e93a0b1bc", 00:15:32.767 "is_configured": true, 00:15:32.767 "data_offset": 0, 00:15:32.767 "data_size": 65536 00:15:32.767 }, 00:15:32.767 { 00:15:32.767 "name": "BaseBdev3", 00:15:32.767 "uuid": "81e8db39-83af-4053-a76d-94d43e8f5442", 00:15:32.767 "is_configured": true, 00:15:32.767 "data_offset": 0, 00:15:32.767 "data_size": 65536 00:15:32.767 } 00:15:32.767 ] 00:15:32.767 }' 00:15:32.767 10:40:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.767 10:40:59 -- common/autotest_common.sh@10 -- # set +x 00:15:33.333 10:40:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:33.591 [2024-07-24 10:41:00.211205] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.591 [2024-07-24 10:41:00.211457] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.591 [2024-07-24 10:41:00.211726] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.591 10:41:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.849 10:41:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.849 "name": "Existed_Raid", 00:15:33.849 "uuid": "ad5c1a8d-e173-4e63-9fd8-c64a59a02df2", 00:15:33.849 "strip_size_kb": 64, 00:15:33.849 "state": "offline", 00:15:33.849 "raid_level": "raid0", 00:15:33.849 "superblock": false, 00:15:33.849 "num_base_bdevs": 3, 00:15:33.849 "num_base_bdevs_discovered": 2, 00:15:33.849 "num_base_bdevs_operational": 2, 00:15:33.849 "base_bdevs_list": [ 00:15:33.849 { 00:15:33.849 "name": null, 00:15:33.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.849 "is_configured": false, 00:15:33.849 "data_offset": 0, 00:15:33.849 "data_size": 65536 00:15:33.849 }, 00:15:33.849 { 00:15:33.849 "name": "BaseBdev2", 00:15:33.849 "uuid": "7947301d-6857-4628-a9b9-a77e93a0b1bc", 00:15:33.849 "is_configured": true, 00:15:33.849 "data_offset": 0, 00:15:33.849 "data_size": 65536 00:15:33.849 }, 00:15:33.849 { 00:15:33.849 "name": "BaseBdev3", 00:15:33.849 "uuid": "81e8db39-83af-4053-a76d-94d43e8f5442", 00:15:33.849 "is_configured": true, 00:15:33.849 "data_offset": 0, 00:15:33.849 "data_size": 65536 00:15:33.849 } 00:15:33.849 ] 00:15:33.849 }' 00:15:33.849 10:41:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.849 10:41:00 -- common/autotest_common.sh@10 -- # set +x 00:15:34.784 10:41:01 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:34.784 10:41:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.784 10:41:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.784 10:41:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:35.043 10:41:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:35.043 10:41:01 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.043 10:41:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:35.302 [2024-07-24 10:41:01.730843] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:35.302 10:41:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:35.302 10:41:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:35.302 10:41:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:35.302 10:41:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.560 10:41:02 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:35.560 10:41:02 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.560 10:41:02 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:35.819 [2024-07-24 10:41:02.316694] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.819 [2024-07-24 10:41:02.317014] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:15:35.819 10:41:02 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:35.819 10:41:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:35.819 10:41:02 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.819 10:41:02 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:36.078 10:41:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:36.078 10:41:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:36.078 10:41:02 -- bdev/bdev_raid.sh@287 -- # killprocess 125719 00:15:36.078 10:41:02 -- common/autotest_common.sh@926 -- # '[' -z 125719 ']' 00:15:36.078 10:41:02 -- common/autotest_common.sh@930 -- # kill -0 125719 00:15:36.078 10:41:02 -- common/autotest_common.sh@931 -- # uname 00:15:36.078 10:41:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:36.078 10:41:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125719 00:15:36.078 10:41:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:36.078 10:41:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:36.078 10:41:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125719' 00:15:36.078 killing process with pid 125719 00:15:36.078 10:41:02 -- common/autotest_common.sh@945 -- # kill 125719 00:15:36.078 [2024-07-24 10:41:02.666615] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.078 10:41:02 -- common/autotest_common.sh@950 -- # wait 125719 00:15:36.078 [2024-07-24 10:41:02.666869] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.336 ************************************ 00:15:36.336 END TEST raid_state_function_test 00:15:36.336 ************************************ 00:15:36.336 10:41:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:36.336 00:15:36.336 real 0m12.076s 00:15:36.336 user 0m22.053s 00:15:36.336 sys 0m1.637s 00:15:36.336 10:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:36.336 10:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:36.337 10:41:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:36.337 10:41:02 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:36.337 10:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:36.337 ************************************ 00:15:36.337 START TEST raid_state_function_test_sb 00:15:36.337 ************************************ 00:15:36.337 10:41:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=126104 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126104' 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:36.337 Process raid pid: 126104 00:15:36.337 10:41:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126104 /var/tmp/spdk-raid.sock 00:15:36.337 10:41:02 -- common/autotest_common.sh@819 -- # '[' -z 126104 ']' 00:15:36.337 10:41:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.337 10:41:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:36.337 10:41:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:36.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.337 10:41:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:36.337 10:41:02 -- common/autotest_common.sh@10 -- # set +x 00:15:36.595 [2024-07-24 10:41:03.043030] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
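The raid_state_function_test run that just finished drives the same RPCs through raid0 state transitions: the raid bdev can be declared before any base bdev exists and sits in "configuring", it claims each base bdev as it appears and goes "online" once all three are present, and, raid0 having no redundancy, it drops to "offline" as soon as one base bdev is deleted. A condensed sketch of that flow under the same assumptions as the previous snippet (a single create/verify pass rather than the test's repeated delete-and-recreate cycles):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # declare the raid0 bdev first; its base bdevs do not exist yet, so it stays "configuring"
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'   # "configuring", 0 discovered

  # each malloc bdev is claimed as it is created; the array goes "online" once all three exist
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  $RPC bdev_malloc_create 32 512 -b BaseBdev3
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'   # "online", 3 discovered

  # raid0 has no redundancy, so removing any base bdev takes the array "offline"
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'   # "offline", 2 discovered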
00:15:36.595 [2024-07-24 10:41:03.043628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.595 [2024-07-24 10:41:03.192378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.862 [2024-07-24 10:41:03.295597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.862 [2024-07-24 10:41:03.356273] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.544 10:41:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:37.544 10:41:04 -- common/autotest_common.sh@852 -- # return 0 00:15:37.544 10:41:04 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:37.802 [2024-07-24 10:41:04.228493] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.802 [2024-07-24 10:41:04.228896] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.802 [2024-07-24 10:41:04.229042] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.802 [2024-07-24 10:41:04.229112] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.802 [2024-07-24 10:41:04.229223] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.802 [2024-07-24 10:41:04.229395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.802 10:41:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.061 10:41:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:38.061 "name": "Existed_Raid", 00:15:38.061 "uuid": "f0608903-27aa-439b-8b66-33eeeeca5bc4", 00:15:38.061 "strip_size_kb": 64, 00:15:38.061 "state": "configuring", 00:15:38.061 "raid_level": "raid0", 00:15:38.061 "superblock": true, 00:15:38.061 "num_base_bdevs": 3, 00:15:38.061 "num_base_bdevs_discovered": 0, 00:15:38.061 "num_base_bdevs_operational": 3, 00:15:38.061 "base_bdevs_list": [ 00:15:38.061 { 00:15:38.061 "name": "BaseBdev1", 00:15:38.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.061 "is_configured": false, 00:15:38.061 "data_offset": 0, 00:15:38.061 "data_size": 0 00:15:38.061 }, 00:15:38.061 { 00:15:38.061 "name": "BaseBdev2", 00:15:38.061 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:38.061 "is_configured": false, 00:15:38.061 "data_offset": 0, 00:15:38.061 "data_size": 0 00:15:38.061 }, 00:15:38.061 { 00:15:38.061 "name": "BaseBdev3", 00:15:38.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.061 "is_configured": false, 00:15:38.061 "data_offset": 0, 00:15:38.061 "data_size": 0 00:15:38.061 } 00:15:38.061 ] 00:15:38.061 }' 00:15:38.061 10:41:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:38.061 10:41:04 -- common/autotest_common.sh@10 -- # set +x 00:15:38.628 10:41:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:38.886 [2024-07-24 10:41:05.480558] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.886 [2024-07-24 10:41:05.480856] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:15:38.886 10:41:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:39.144 [2024-07-24 10:41:05.760766] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.144 [2024-07-24 10:41:05.761108] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.144 [2024-07-24 10:41:05.761232] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.145 [2024-07-24 10:41:05.761314] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.145 [2024-07-24 10:41:05.761475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.145 [2024-07-24 10:41:05.761551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.145 10:41:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.403 [2024-07-24 10:41:06.018228] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.403 BaseBdev1 00:15:39.403 10:41:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:39.403 10:41:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:39.403 10:41:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:39.403 10:41:06 -- common/autotest_common.sh@889 -- # local i 00:15:39.403 10:41:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:39.403 10:41:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:39.403 10:41:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.662 10:41:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.919 [ 00:15:39.919 { 00:15:39.919 "name": "BaseBdev1", 00:15:39.919 "aliases": [ 00:15:39.920 "58d3d08f-37c3-43cd-b29c-3a173899aed5" 00:15:39.920 ], 00:15:39.920 "product_name": "Malloc disk", 00:15:39.920 "block_size": 512, 00:15:39.920 "num_blocks": 65536, 00:15:39.920 "uuid": "58d3d08f-37c3-43cd-b29c-3a173899aed5", 00:15:39.920 "assigned_rate_limits": { 00:15:39.920 "rw_ios_per_sec": 0, 00:15:39.920 "rw_mbytes_per_sec": 0, 00:15:39.920 "r_mbytes_per_sec": 0, 00:15:39.920 
"w_mbytes_per_sec": 0 00:15:39.920 }, 00:15:39.920 "claimed": true, 00:15:39.920 "claim_type": "exclusive_write", 00:15:39.920 "zoned": false, 00:15:39.920 "supported_io_types": { 00:15:39.920 "read": true, 00:15:39.920 "write": true, 00:15:39.920 "unmap": true, 00:15:39.920 "write_zeroes": true, 00:15:39.920 "flush": true, 00:15:39.920 "reset": true, 00:15:39.920 "compare": false, 00:15:39.920 "compare_and_write": false, 00:15:39.920 "abort": true, 00:15:39.920 "nvme_admin": false, 00:15:39.920 "nvme_io": false 00:15:39.920 }, 00:15:39.920 "memory_domains": [ 00:15:39.920 { 00:15:39.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.920 "dma_device_type": 2 00:15:39.920 } 00:15:39.920 ], 00:15:39.920 "driver_specific": {} 00:15:39.920 } 00:15:39.920 ] 00:15:39.920 10:41:06 -- common/autotest_common.sh@895 -- # return 0 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.920 10:41:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.177 10:41:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.177 "name": "Existed_Raid", 00:15:40.177 "uuid": "bffbbf41-5974-492e-b947-c7e4d301066c", 00:15:40.177 "strip_size_kb": 64, 00:15:40.177 "state": "configuring", 00:15:40.177 "raid_level": "raid0", 00:15:40.177 "superblock": true, 00:15:40.177 "num_base_bdevs": 3, 00:15:40.177 "num_base_bdevs_discovered": 1, 00:15:40.177 "num_base_bdevs_operational": 3, 00:15:40.177 "base_bdevs_list": [ 00:15:40.178 { 00:15:40.178 "name": "BaseBdev1", 00:15:40.178 "uuid": "58d3d08f-37c3-43cd-b29c-3a173899aed5", 00:15:40.178 "is_configured": true, 00:15:40.178 "data_offset": 2048, 00:15:40.178 "data_size": 63488 00:15:40.178 }, 00:15:40.178 { 00:15:40.178 "name": "BaseBdev2", 00:15:40.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.178 "is_configured": false, 00:15:40.178 "data_offset": 0, 00:15:40.178 "data_size": 0 00:15:40.178 }, 00:15:40.178 { 00:15:40.178 "name": "BaseBdev3", 00:15:40.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.178 "is_configured": false, 00:15:40.178 "data_offset": 0, 00:15:40.178 "data_size": 0 00:15:40.178 } 00:15:40.178 ] 00:15:40.178 }' 00:15:40.178 10:41:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.178 10:41:06 -- common/autotest_common.sh@10 -- # set +x 00:15:40.744 10:41:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:41.002 [2024-07-24 10:41:07.570818] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.002 [2024-07-24 10:41:07.571330] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:15:41.002 10:41:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:41.002 10:41:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:41.260 10:41:07 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.528 BaseBdev1 00:15:41.528 10:41:08 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:41.528 10:41:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:41.528 10:41:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:41.528 10:41:08 -- common/autotest_common.sh@889 -- # local i 00:15:41.528 10:41:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:41.528 10:41:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:41.528 10:41:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.787 10:41:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.045 [ 00:15:42.045 { 00:15:42.045 "name": "BaseBdev1", 00:15:42.045 "aliases": [ 00:15:42.045 "11781121-fe92-45ab-acff-aa77989ec8a4" 00:15:42.045 ], 00:15:42.045 "product_name": "Malloc disk", 00:15:42.045 "block_size": 512, 00:15:42.045 "num_blocks": 65536, 00:15:42.045 "uuid": "11781121-fe92-45ab-acff-aa77989ec8a4", 00:15:42.045 "assigned_rate_limits": { 00:15:42.045 "rw_ios_per_sec": 0, 00:15:42.045 "rw_mbytes_per_sec": 0, 00:15:42.045 "r_mbytes_per_sec": 0, 00:15:42.045 "w_mbytes_per_sec": 0 00:15:42.045 }, 00:15:42.045 "claimed": false, 00:15:42.045 "zoned": false, 00:15:42.045 "supported_io_types": { 00:15:42.045 "read": true, 00:15:42.045 "write": true, 00:15:42.045 "unmap": true, 00:15:42.045 "write_zeroes": true, 00:15:42.045 "flush": true, 00:15:42.045 "reset": true, 00:15:42.045 "compare": false, 00:15:42.045 "compare_and_write": false, 00:15:42.045 "abort": true, 00:15:42.045 "nvme_admin": false, 00:15:42.045 "nvme_io": false 00:15:42.045 }, 00:15:42.045 "memory_domains": [ 00:15:42.045 { 00:15:42.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.045 "dma_device_type": 2 00:15:42.045 } 00:15:42.045 ], 00:15:42.045 "driver_specific": {} 00:15:42.045 } 00:15:42.045 ] 00:15:42.045 10:41:08 -- common/autotest_common.sh@895 -- # return 0 00:15:42.045 10:41:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:42.304 [2024-07-24 10:41:08.854363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.304 [2024-07-24 10:41:08.857369] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.304 [2024-07-24 10:41:08.857617] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.304 [2024-07-24 10:41:08.857757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.304 [2024-07-24 10:41:08.857866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:42.304 
10:41:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.304 10:41:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.562 10:41:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.562 "name": "Existed_Raid", 00:15:42.562 "uuid": "b34707f1-e940-4a2b-a168-5ee753e77d4b", 00:15:42.562 "strip_size_kb": 64, 00:15:42.562 "state": "configuring", 00:15:42.562 "raid_level": "raid0", 00:15:42.562 "superblock": true, 00:15:42.562 "num_base_bdevs": 3, 00:15:42.562 "num_base_bdevs_discovered": 1, 00:15:42.562 "num_base_bdevs_operational": 3, 00:15:42.562 "base_bdevs_list": [ 00:15:42.562 { 00:15:42.562 "name": "BaseBdev1", 00:15:42.562 "uuid": "11781121-fe92-45ab-acff-aa77989ec8a4", 00:15:42.562 "is_configured": true, 00:15:42.562 "data_offset": 2048, 00:15:42.562 "data_size": 63488 00:15:42.562 }, 00:15:42.562 { 00:15:42.562 "name": "BaseBdev2", 00:15:42.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.562 "is_configured": false, 00:15:42.562 "data_offset": 0, 00:15:42.562 "data_size": 0 00:15:42.562 }, 00:15:42.562 { 00:15:42.562 "name": "BaseBdev3", 00:15:42.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.562 "is_configured": false, 00:15:42.562 "data_offset": 0, 00:15:42.562 "data_size": 0 00:15:42.562 } 00:15:42.562 ] 00:15:42.562 }' 00:15:42.562 10:41:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.562 10:41:09 -- common/autotest_common.sh@10 -- # set +x 00:15:43.129 10:41:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.387 [2024-07-24 10:41:09.956965] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.387 BaseBdev2 00:15:43.387 10:41:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:43.387 10:41:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:43.387 10:41:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:43.387 10:41:09 -- common/autotest_common.sh@889 -- # local i 00:15:43.387 10:41:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:43.387 10:41:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:43.387 10:41:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.645 10:41:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.904 [ 00:15:43.904 { 00:15:43.904 "name": "BaseBdev2", 00:15:43.904 "aliases": [ 00:15:43.904 
"18a11149-9907-4ed1-a367-e884b2f601b8" 00:15:43.904 ], 00:15:43.904 "product_name": "Malloc disk", 00:15:43.904 "block_size": 512, 00:15:43.904 "num_blocks": 65536, 00:15:43.904 "uuid": "18a11149-9907-4ed1-a367-e884b2f601b8", 00:15:43.904 "assigned_rate_limits": { 00:15:43.904 "rw_ios_per_sec": 0, 00:15:43.904 "rw_mbytes_per_sec": 0, 00:15:43.904 "r_mbytes_per_sec": 0, 00:15:43.904 "w_mbytes_per_sec": 0 00:15:43.904 }, 00:15:43.904 "claimed": true, 00:15:43.904 "claim_type": "exclusive_write", 00:15:43.904 "zoned": false, 00:15:43.904 "supported_io_types": { 00:15:43.904 "read": true, 00:15:43.904 "write": true, 00:15:43.904 "unmap": true, 00:15:43.904 "write_zeroes": true, 00:15:43.904 "flush": true, 00:15:43.904 "reset": true, 00:15:43.904 "compare": false, 00:15:43.904 "compare_and_write": false, 00:15:43.904 "abort": true, 00:15:43.904 "nvme_admin": false, 00:15:43.904 "nvme_io": false 00:15:43.904 }, 00:15:43.904 "memory_domains": [ 00:15:43.904 { 00:15:43.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.904 "dma_device_type": 2 00:15:43.904 } 00:15:43.904 ], 00:15:43.904 "driver_specific": {} 00:15:43.904 } 00:15:43.904 ] 00:15:43.904 10:41:10 -- common/autotest_common.sh@895 -- # return 0 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.904 10:41:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.162 10:41:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.162 "name": "Existed_Raid", 00:15:44.162 "uuid": "b34707f1-e940-4a2b-a168-5ee753e77d4b", 00:15:44.162 "strip_size_kb": 64, 00:15:44.162 "state": "configuring", 00:15:44.162 "raid_level": "raid0", 00:15:44.162 "superblock": true, 00:15:44.162 "num_base_bdevs": 3, 00:15:44.162 "num_base_bdevs_discovered": 2, 00:15:44.162 "num_base_bdevs_operational": 3, 00:15:44.162 "base_bdevs_list": [ 00:15:44.162 { 00:15:44.162 "name": "BaseBdev1", 00:15:44.162 "uuid": "11781121-fe92-45ab-acff-aa77989ec8a4", 00:15:44.162 "is_configured": true, 00:15:44.162 "data_offset": 2048, 00:15:44.162 "data_size": 63488 00:15:44.162 }, 00:15:44.162 { 00:15:44.162 "name": "BaseBdev2", 00:15:44.162 "uuid": "18a11149-9907-4ed1-a367-e884b2f601b8", 00:15:44.162 "is_configured": true, 00:15:44.162 "data_offset": 2048, 00:15:44.162 "data_size": 63488 00:15:44.162 }, 00:15:44.162 { 00:15:44.162 "name": "BaseBdev3", 00:15:44.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.162 "is_configured": false, 00:15:44.162 "data_offset": 0, 00:15:44.162 "data_size": 0 00:15:44.162 
} 00:15:44.162 ] 00:15:44.162 }' 00:15:44.162 10:41:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.162 10:41:10 -- common/autotest_common.sh@10 -- # set +x 00:15:44.729 10:41:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.987 [2024-07-24 10:41:11.609940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.987 [2024-07-24 10:41:11.611130] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:15:44.987 [2024-07-24 10:41:11.611325] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:44.987 [2024-07-24 10:41:11.611572] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:15:44.987 [2024-07-24 10:41:11.612126] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:15:44.987 BaseBdev3 00:15:44.987 [2024-07-24 10:41:11.612306] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:15:44.987 [2024-07-24 10:41:11.612770] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.987 10:41:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:44.987 10:41:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:15:44.987 10:41:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:44.987 10:41:11 -- common/autotest_common.sh@889 -- # local i 00:15:44.987 10:41:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:44.987 10:41:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:44.987 10:41:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.246 10:41:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:45.505 [ 00:15:45.505 { 00:15:45.505 "name": "BaseBdev3", 00:15:45.505 "aliases": [ 00:15:45.505 "c6f5675e-bd7f-4c1a-9073-ed35250286f5" 00:15:45.505 ], 00:15:45.505 "product_name": "Malloc disk", 00:15:45.505 "block_size": 512, 00:15:45.505 "num_blocks": 65536, 00:15:45.505 "uuid": "c6f5675e-bd7f-4c1a-9073-ed35250286f5", 00:15:45.505 "assigned_rate_limits": { 00:15:45.505 "rw_ios_per_sec": 0, 00:15:45.505 "rw_mbytes_per_sec": 0, 00:15:45.505 "r_mbytes_per_sec": 0, 00:15:45.505 "w_mbytes_per_sec": 0 00:15:45.505 }, 00:15:45.505 "claimed": true, 00:15:45.505 "claim_type": "exclusive_write", 00:15:45.505 "zoned": false, 00:15:45.505 "supported_io_types": { 00:15:45.505 "read": true, 00:15:45.505 "write": true, 00:15:45.505 "unmap": true, 00:15:45.505 "write_zeroes": true, 00:15:45.505 "flush": true, 00:15:45.505 "reset": true, 00:15:45.505 "compare": false, 00:15:45.505 "compare_and_write": false, 00:15:45.505 "abort": true, 00:15:45.505 "nvme_admin": false, 00:15:45.505 "nvme_io": false 00:15:45.505 }, 00:15:45.505 "memory_domains": [ 00:15:45.505 { 00:15:45.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.505 "dma_device_type": 2 00:15:45.505 } 00:15:45.505 ], 00:15:45.505 "driver_specific": {} 00:15:45.505 } 00:15:45.505 ] 00:15:45.505 10:41:12 -- common/autotest_common.sh@895 -- # return 0 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.505 10:41:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.772 10:41:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.772 "name": "Existed_Raid", 00:15:45.772 "uuid": "b34707f1-e940-4a2b-a168-5ee753e77d4b", 00:15:45.772 "strip_size_kb": 64, 00:15:45.772 "state": "online", 00:15:45.772 "raid_level": "raid0", 00:15:45.772 "superblock": true, 00:15:45.772 "num_base_bdevs": 3, 00:15:45.772 "num_base_bdevs_discovered": 3, 00:15:45.772 "num_base_bdevs_operational": 3, 00:15:45.772 "base_bdevs_list": [ 00:15:45.772 { 00:15:45.772 "name": "BaseBdev1", 00:15:45.772 "uuid": "11781121-fe92-45ab-acff-aa77989ec8a4", 00:15:45.772 "is_configured": true, 00:15:45.772 "data_offset": 2048, 00:15:45.772 "data_size": 63488 00:15:45.772 }, 00:15:45.772 { 00:15:45.772 "name": "BaseBdev2", 00:15:45.772 "uuid": "18a11149-9907-4ed1-a367-e884b2f601b8", 00:15:45.772 "is_configured": true, 00:15:45.772 "data_offset": 2048, 00:15:45.772 "data_size": 63488 00:15:45.772 }, 00:15:45.772 { 00:15:45.772 "name": "BaseBdev3", 00:15:45.772 "uuid": "c6f5675e-bd7f-4c1a-9073-ed35250286f5", 00:15:45.772 "is_configured": true, 00:15:45.772 "data_offset": 2048, 00:15:45.772 "data_size": 63488 00:15:45.772 } 00:15:45.772 ] 00:15:45.772 }' 00:15:45.772 10:41:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.772 10:41:12 -- common/autotest_common.sh@10 -- # set +x 00:15:46.353 10:41:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:46.612 [2024-07-24 10:41:13.186588] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.612 [2024-07-24 10:41:13.186967] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.612 [2024-07-24 10:41:13.187252] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.612 10:41:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.869 10:41:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.869 "name": "Existed_Raid", 00:15:46.869 "uuid": "b34707f1-e940-4a2b-a168-5ee753e77d4b", 00:15:46.869 "strip_size_kb": 64, 00:15:46.869 "state": "offline", 00:15:46.869 "raid_level": "raid0", 00:15:46.869 "superblock": true, 00:15:46.869 "num_base_bdevs": 3, 00:15:46.869 "num_base_bdevs_discovered": 2, 00:15:46.869 "num_base_bdevs_operational": 2, 00:15:46.869 "base_bdevs_list": [ 00:15:46.869 { 00:15:46.869 "name": null, 00:15:46.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.869 "is_configured": false, 00:15:46.869 "data_offset": 2048, 00:15:46.869 "data_size": 63488 00:15:46.869 }, 00:15:46.869 { 00:15:46.869 "name": "BaseBdev2", 00:15:46.869 "uuid": "18a11149-9907-4ed1-a367-e884b2f601b8", 00:15:46.869 "is_configured": true, 00:15:46.869 "data_offset": 2048, 00:15:46.869 "data_size": 63488 00:15:46.869 }, 00:15:46.869 { 00:15:46.869 "name": "BaseBdev3", 00:15:46.869 "uuid": "c6f5675e-bd7f-4c1a-9073-ed35250286f5", 00:15:46.869 "is_configured": true, 00:15:46.869 "data_offset": 2048, 00:15:46.869 "data_size": 63488 00:15:46.869 } 00:15:46.869 ] 00:15:46.869 }' 00:15:46.869 10:41:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.869 10:41:13 -- common/autotest_common.sh@10 -- # set +x 00:15:47.435 10:41:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:47.435 10:41:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:47.435 10:41:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.435 10:41:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:47.693 10:41:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:47.693 10:41:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.693 10:41:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:47.951 [2024-07-24 10:41:14.574839] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.951 10:41:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:47.951 10:41:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:47.951 10:41:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.951 10:41:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:48.209 10:41:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:48.209 10:41:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.210 10:41:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:48.467 [2024-07-24 10:41:15.144192] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.467 [2024-07-24 
10:41:15.144655] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:15:48.725 10:41:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:48.725 10:41:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:48.725 10:41:15 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.725 10:41:15 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:48.984 10:41:15 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:48.984 10:41:15 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:48.984 10:41:15 -- bdev/bdev_raid.sh@287 -- # killprocess 126104 00:15:48.984 10:41:15 -- common/autotest_common.sh@926 -- # '[' -z 126104 ']' 00:15:48.984 10:41:15 -- common/autotest_common.sh@930 -- # kill -0 126104 00:15:48.984 10:41:15 -- common/autotest_common.sh@931 -- # uname 00:15:48.984 10:41:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:48.984 10:41:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126104 00:15:48.984 killing process with pid 126104 00:15:48.984 10:41:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:48.984 10:41:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:48.984 10:41:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126104' 00:15:48.984 10:41:15 -- common/autotest_common.sh@945 -- # kill 126104 00:15:48.984 10:41:15 -- common/autotest_common.sh@950 -- # wait 126104 00:15:48.984 [2024-07-24 10:41:15.435952] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.984 [2024-07-24 10:41:15.436120] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:49.242 00:15:49.242 real 0m12.796s 00:15:49.242 user 0m23.362s 00:15:49.242 sys 0m1.642s 00:15:49.242 10:41:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.242 10:41:15 -- common/autotest_common.sh@10 -- # set +x 00:15:49.242 ************************************ 00:15:49.242 END TEST raid_state_function_test_sb 00:15:49.242 ************************************ 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:49.242 10:41:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:49.242 10:41:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:49.242 10:41:15 -- common/autotest_common.sh@10 -- # set +x 00:15:49.242 ************************************ 00:15:49.242 START TEST raid_superblock_test 00:15:49.242 ************************************ 00:15:49.242 10:41:15 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:49.242 10:41:15 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=126489 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126489 /var/tmp/spdk-raid.sock 00:15:49.242 10:41:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:49.242 10:41:15 -- common/autotest_common.sh@819 -- # '[' -z 126489 ']' 00:15:49.242 10:41:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:49.242 10:41:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:49.242 10:41:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:49.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:49.242 10:41:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:49.242 10:41:15 -- common/autotest_common.sh@10 -- # set +x 00:15:49.242 [2024-07-24 10:41:15.881465] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:49.242 [2024-07-24 10:41:15.881966] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126489 ] 00:15:49.502 [2024-07-24 10:41:16.028980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.502 [2024-07-24 10:41:16.129205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.502 [2024-07-24 10:41:16.185398] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.479 10:41:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:50.479 10:41:16 -- common/autotest_common.sh@852 -- # return 0 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.479 10:41:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:50.479 malloc1 00:15:50.479 10:41:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:50.737 [2024-07-24 10:41:17.387531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:50.737 [2024-07-24 10:41:17.388038] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.737 
[2024-07-24 10:41:17.388255] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:15:50.737 [2024-07-24 10:41:17.388481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.737 [2024-07-24 10:41:17.392283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.737 [2024-07-24 10:41:17.392524] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:50.737 pt1 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.737 10:41:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:50.996 malloc2 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.255 [2024-07-24 10:41:17.908193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.255 [2024-07-24 10:41:17.908670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.255 [2024-07-24 10:41:17.908853] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:51.255 [2024-07-24 10:41:17.909016] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.255 [2024-07-24 10:41:17.911916] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.255 [2024-07-24 10:41:17.912094] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.255 pt2 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.255 10:41:17 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:51.514 malloc3 00:15:51.514 10:41:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.772 [2024-07-24 10:41:18.435720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.772 [2024-07-24 10:41:18.436151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.772 
[2024-07-24 10:41:18.436251] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:51.772 [2024-07-24 10:41:18.436588] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.772 [2024-07-24 10:41:18.439760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.773 [2024-07-24 10:41:18.439953] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.773 pt3 00:15:52.032 10:41:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:52.032 10:41:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:52.032 10:41:18 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:52.032 [2024-07-24 10:41:18.704686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.032 [2024-07-24 10:41:18.708201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.032 [2024-07-24 10:41:18.708499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.032 [2024-07-24 10:41:18.709049] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:52.032 [2024-07-24 10:41:18.709223] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:52.032 [2024-07-24 10:41:18.709524] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:15:52.032 [2024-07-24 10:41:18.710234] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:52.032 [2024-07-24 10:41:18.710404] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:15:52.032 [2024-07-24 10:41:18.710777] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.292 10:41:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.551 10:41:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.551 "name": "raid_bdev1", 00:15:52.551 "uuid": "20b5e4ba-96a6-443c-9087-2efd8787bbee", 00:15:52.551 "strip_size_kb": 64, 00:15:52.551 "state": "online", 00:15:52.551 "raid_level": "raid0", 00:15:52.551 "superblock": true, 00:15:52.551 "num_base_bdevs": 3, 00:15:52.551 "num_base_bdevs_discovered": 3, 00:15:52.551 "num_base_bdevs_operational": 3, 00:15:52.551 "base_bdevs_list": [ 00:15:52.551 { 00:15:52.551 "name": "pt1", 00:15:52.551 "uuid": 
"3caa648e-8f4f-5d80-9933-0d70a4047d2b", 00:15:52.551 "is_configured": true, 00:15:52.551 "data_offset": 2048, 00:15:52.551 "data_size": 63488 00:15:52.551 }, 00:15:52.551 { 00:15:52.551 "name": "pt2", 00:15:52.551 "uuid": "62ed8d3c-4137-5785-ba98-dc57e1085daa", 00:15:52.551 "is_configured": true, 00:15:52.551 "data_offset": 2048, 00:15:52.551 "data_size": 63488 00:15:52.551 }, 00:15:52.551 { 00:15:52.551 "name": "pt3", 00:15:52.551 "uuid": "f9e2c76f-b96e-5a5b-a384-8f8b2613a132", 00:15:52.551 "is_configured": true, 00:15:52.551 "data_offset": 2048, 00:15:52.551 "data_size": 63488 00:15:52.551 } 00:15:52.551 ] 00:15:52.551 }' 00:15:52.551 10:41:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.551 10:41:18 -- common/autotest_common.sh@10 -- # set +x 00:15:53.117 10:41:19 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:53.117 10:41:19 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:53.375 [2024-07-24 10:41:19.897638] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.375 10:41:19 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=20b5e4ba-96a6-443c-9087-2efd8787bbee 00:15:53.375 10:41:19 -- bdev/bdev_raid.sh@380 -- # '[' -z 20b5e4ba-96a6-443c-9087-2efd8787bbee ']' 00:15:53.375 10:41:19 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:53.655 [2024-07-24 10:41:20.173434] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.655 [2024-07-24 10:41:20.173803] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.655 [2024-07-24 10:41:20.174094] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.655 [2024-07-24 10:41:20.174321] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.655 [2024-07-24 10:41:20.174462] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:15:53.655 10:41:20 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.655 10:41:20 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:53.913 10:41:20 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:53.913 10:41:20 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:53.913 10:41:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.913 10:41:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:54.171 10:41:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:54.171 10:41:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:54.429 10:41:20 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:54.429 10:41:20 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:54.688 10:41:21 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:54.688 10:41:21 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:54.947 10:41:21 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:54.947 10:41:21 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:54.947 10:41:21 -- common/autotest_common.sh@640 -- # local es=0 00:15:54.947 10:41:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:54.947 10:41:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.947 10:41:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:54.947 10:41:21 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.947 10:41:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:54.947 10:41:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.947 10:41:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:54.947 10:41:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.947 10:41:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:54.947 10:41:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:54.947 [2024-07-24 10:41:21.629710] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:55.206 [2024-07-24 10:41:21.632557] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:55.206 [2024-07-24 10:41:21.632758] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:55.206 [2024-07-24 10:41:21.632890] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:55.206 [2024-07-24 10:41:21.633198] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:55.206 [2024-07-24 10:41:21.633389] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:55.206 [2024-07-24 10:41:21.633569] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:55.206 [2024-07-24 10:41:21.633692] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:15:55.206 request: 00:15:55.206 { 00:15:55.206 "name": "raid_bdev1", 00:15:55.206 "raid_level": "raid0", 00:15:55.206 "base_bdevs": [ 00:15:55.206 "malloc1", 00:15:55.206 "malloc2", 00:15:55.206 "malloc3" 00:15:55.206 ], 00:15:55.206 "superblock": false, 00:15:55.206 "strip_size_kb": 64, 00:15:55.206 "method": "bdev_raid_create", 00:15:55.206 "req_id": 1 00:15:55.206 } 00:15:55.206 Got JSON-RPC error response 00:15:55.206 response: 00:15:55.206 { 00:15:55.206 "code": -17, 00:15:55.206 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:55.206 } 00:15:55.206 10:41:21 -- common/autotest_common.sh@643 -- # es=1 00:15:55.206 10:41:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:55.206 10:41:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:55.206 10:41:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:55.206 10:41:21 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.206 10:41:21 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:55.206 10:41:21 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:55.206 10:41:21 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:55.206 10:41:21 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:55.464 [2024-07-24 10:41:22.098325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:55.464 [2024-07-24 10:41:22.098794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.464 [2024-07-24 10:41:22.099008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:55.464 [2024-07-24 10:41:22.099183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.464 [2024-07-24 10:41:22.102039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.464 [2024-07-24 10:41:22.102282] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:55.464 [2024-07-24 10:41:22.102564] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:55.464 [2024-07-24 10:41:22.102786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.464 pt1 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.464 10:41:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.723 10:41:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.723 "name": "raid_bdev1", 00:15:55.723 "uuid": "20b5e4ba-96a6-443c-9087-2efd8787bbee", 00:15:55.723 "strip_size_kb": 64, 00:15:55.723 "state": "configuring", 00:15:55.723 "raid_level": "raid0", 00:15:55.723 "superblock": true, 00:15:55.723 "num_base_bdevs": 3, 00:15:55.723 "num_base_bdevs_discovered": 1, 00:15:55.723 "num_base_bdevs_operational": 3, 00:15:55.723 "base_bdevs_list": [ 00:15:55.723 { 00:15:55.723 "name": "pt1", 00:15:55.723 "uuid": "3caa648e-8f4f-5d80-9933-0d70a4047d2b", 00:15:55.723 "is_configured": true, 00:15:55.723 "data_offset": 2048, 00:15:55.723 "data_size": 63488 00:15:55.723 }, 00:15:55.723 { 00:15:55.723 "name": null, 00:15:55.723 "uuid": "62ed8d3c-4137-5785-ba98-dc57e1085daa", 00:15:55.723 "is_configured": false, 00:15:55.723 "data_offset": 2048, 00:15:55.723 "data_size": 63488 00:15:55.723 }, 00:15:55.723 { 00:15:55.723 "name": null, 00:15:55.723 "uuid": "f9e2c76f-b96e-5a5b-a384-8f8b2613a132", 00:15:55.723 "is_configured": false, 00:15:55.723 
"data_offset": 2048, 00:15:55.723 "data_size": 63488 00:15:55.723 } 00:15:55.723 ] 00:15:55.723 }' 00:15:55.723 10:41:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.723 10:41:22 -- common/autotest_common.sh@10 -- # set +x 00:15:56.656 10:41:23 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:56.656 10:41:23 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.656 [2024-07-24 10:41:23.243165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.656 [2024-07-24 10:41:23.243642] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.656 [2024-07-24 10:41:23.243856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:56.656 [2024-07-24 10:41:23.244057] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.656 [2024-07-24 10:41:23.244785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.656 [2024-07-24 10:41:23.245015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.656 [2024-07-24 10:41:23.245281] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:56.656 [2024-07-24 10:41:23.245445] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.656 pt2 00:15:56.656 10:41:23 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:56.914 [2024-07-24 10:41:23.543324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.914 10:41:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.172 10:41:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.172 "name": "raid_bdev1", 00:15:57.172 "uuid": "20b5e4ba-96a6-443c-9087-2efd8787bbee", 00:15:57.172 "strip_size_kb": 64, 00:15:57.172 "state": "configuring", 00:15:57.172 "raid_level": "raid0", 00:15:57.172 "superblock": true, 00:15:57.172 "num_base_bdevs": 3, 00:15:57.172 "num_base_bdevs_discovered": 1, 00:15:57.172 "num_base_bdevs_operational": 3, 00:15:57.172 "base_bdevs_list": [ 00:15:57.172 { 00:15:57.172 "name": "pt1", 00:15:57.172 "uuid": "3caa648e-8f4f-5d80-9933-0d70a4047d2b", 00:15:57.172 "is_configured": true, 00:15:57.172 "data_offset": 2048, 00:15:57.172 "data_size": 63488 00:15:57.172 }, 00:15:57.172 { 00:15:57.172 "name": null, 00:15:57.172 "uuid": 
"62ed8d3c-4137-5785-ba98-dc57e1085daa", 00:15:57.172 "is_configured": false, 00:15:57.172 "data_offset": 2048, 00:15:57.172 "data_size": 63488 00:15:57.172 }, 00:15:57.172 { 00:15:57.172 "name": null, 00:15:57.172 "uuid": "f9e2c76f-b96e-5a5b-a384-8f8b2613a132", 00:15:57.172 "is_configured": false, 00:15:57.172 "data_offset": 2048, 00:15:57.172 "data_size": 63488 00:15:57.172 } 00:15:57.172 ] 00:15:57.172 }' 00:15:57.172 10:41:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.172 10:41:23 -- common/autotest_common.sh@10 -- # set +x 00:15:57.795 10:41:24 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:57.795 10:41:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:57.795 10:41:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.054 [2024-07-24 10:41:24.715610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.054 [2024-07-24 10:41:24.716065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.054 [2024-07-24 10:41:24.716162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:58.054 [2024-07-24 10:41:24.716310] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.054 [2024-07-24 10:41:24.716924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.054 [2024-07-24 10:41:24.717111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.054 [2024-07-24 10:41:24.717351] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:58.054 [2024-07-24 10:41:24.717500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.054 pt2 00:15:58.054 10:41:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:58.054 10:41:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:58.054 10:41:24 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:58.313 [2024-07-24 10:41:24.967724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:58.313 [2024-07-24 10:41:24.968189] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.313 [2024-07-24 10:41:24.968277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:58.313 [2024-07-24 10:41:24.968654] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.313 [2024-07-24 10:41:24.969419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.313 [2024-07-24 10:41:24.969604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:58.313 [2024-07-24 10:41:24.969851] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:58.313 [2024-07-24 10:41:24.970018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:58.313 [2024-07-24 10:41:24.970310] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:15:58.313 [2024-07-24 10:41:24.970446] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:58.313 [2024-07-24 10:41:24.970588] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:15:58.313 [2024-07-24 10:41:24.971017] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:15:58.313 [2024-07-24 10:41:24.971173] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:15:58.313 [2024-07-24 10:41:24.971408] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.313 pt3 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.313 10:41:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.576 10:41:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.576 "name": "raid_bdev1", 00:15:58.576 "uuid": "20b5e4ba-96a6-443c-9087-2efd8787bbee", 00:15:58.576 "strip_size_kb": 64, 00:15:58.576 "state": "online", 00:15:58.576 "raid_level": "raid0", 00:15:58.576 "superblock": true, 00:15:58.576 "num_base_bdevs": 3, 00:15:58.576 "num_base_bdevs_discovered": 3, 00:15:58.576 "num_base_bdevs_operational": 3, 00:15:58.576 "base_bdevs_list": [ 00:15:58.576 { 00:15:58.576 "name": "pt1", 00:15:58.576 "uuid": "3caa648e-8f4f-5d80-9933-0d70a4047d2b", 00:15:58.576 "is_configured": true, 00:15:58.576 "data_offset": 2048, 00:15:58.576 "data_size": 63488 00:15:58.576 }, 00:15:58.576 { 00:15:58.576 "name": "pt2", 00:15:58.576 "uuid": "62ed8d3c-4137-5785-ba98-dc57e1085daa", 00:15:58.576 "is_configured": true, 00:15:58.576 "data_offset": 2048, 00:15:58.576 "data_size": 63488 00:15:58.576 }, 00:15:58.576 { 00:15:58.576 "name": "pt3", 00:15:58.576 "uuid": "f9e2c76f-b96e-5a5b-a384-8f8b2613a132", 00:15:58.576 "is_configured": true, 00:15:58.576 "data_offset": 2048, 00:15:58.576 "data_size": 63488 00:15:58.576 } 00:15:58.576 ] 00:15:58.576 }' 00:15:58.576 10:41:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.576 10:41:25 -- common/autotest_common.sh@10 -- # set +x 00:15:59.512 10:41:25 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:59.512 10:41:25 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:59.512 [2024-07-24 10:41:26.136482] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.512 10:41:26 -- bdev/bdev_raid.sh@430 -- # '[' 20b5e4ba-96a6-443c-9087-2efd8787bbee '!=' 20b5e4ba-96a6-443c-9087-2efd8787bbee ']' 00:15:59.512 10:41:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:59.512 10:41:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:59.512 
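raid_bdev1 has just come back online after being reassembled from the raid superblocks found on the three passthru bdevs. A minimal sketch of the RPC sequence the trace above drives, assuming the test's SPDK app is still listening on /var/tmp/spdk-raid.sock, malloc1..malloc3 exist, and a raid superblock was written to them earlier in the test; the RPC shell variable is only shorthand introduced here for readability:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Wrap each malloc bdev in a passthru bdev. On examine, the raid superblock
    # is found on the new bdev and raid_bdev1 claims it automatically.
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
    # Once all three base bdevs are discovered, the state moves from "configuring" to "online".
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'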
10:41:26 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:59.512 10:41:26 -- bdev/bdev_raid.sh@511 -- # killprocess 126489 00:15:59.512 10:41:26 -- common/autotest_common.sh@926 -- # '[' -z 126489 ']' 00:15:59.512 10:41:26 -- common/autotest_common.sh@930 -- # kill -0 126489 00:15:59.512 10:41:26 -- common/autotest_common.sh@931 -- # uname 00:15:59.512 10:41:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:59.512 10:41:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126489 00:15:59.512 10:41:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:59.512 10:41:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:59.512 10:41:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126489' 00:15:59.512 killing process with pid 126489 00:15:59.512 10:41:26 -- common/autotest_common.sh@945 -- # kill 126489 00:15:59.512 [2024-07-24 10:41:26.179145] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.512 10:41:26 -- common/autotest_common.sh@950 -- # wait 126489 00:15:59.512 [2024-07-24 10:41:26.179421] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.512 [2024-07-24 10:41:26.179658] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.512 [2024-07-24 10:41:26.179796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:15:59.770 [2024-07-24 10:41:26.223832] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.027 ************************************ 00:16:00.027 END TEST raid_superblock_test 00:16:00.027 ************************************ 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:00.027 00:16:00.027 real 0m10.749s 00:16:00.027 user 0m19.352s 00:16:00.027 sys 0m1.490s 00:16:00.027 10:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.027 10:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:00.027 10:41:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:00.027 10:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:00.027 10:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:00.027 ************************************ 00:16:00.027 START TEST raid_state_function_test 00:16:00.027 ************************************ 00:16:00.027 10:41:26 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:00.027 10:41:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=126807 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126807' 00:16:00.028 Process raid pid: 126807 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:00.028 10:41:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126807 /var/tmp/spdk-raid.sock 00:16:00.028 10:41:26 -- common/autotest_common.sh@819 -- # '[' -z 126807 ']' 00:16:00.028 10:41:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:00.028 10:41:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:00.028 10:41:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:00.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:00.028 10:41:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:00.028 10:41:26 -- common/autotest_common.sh@10 -- # set +x 00:16:00.028 [2024-07-24 10:41:26.692911] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
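raid_state_function_test drives its own bare SPDK application rather than a full target: the bdev_svc helper above is launched with raid debug logging and a private RPC socket, and the harness waits until that socket answers before issuing any bdev RPCs. A rough sketch of that startup, using the paths from the trace; the BDEV_SVC and RPC variables are shorthand introduced here, and rpc_get_methods stands in for whatever probe waitforlisten actually uses:

    BDEV_SVC=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Start a bdev-only SPDK app with raid debug logging, serving RPCs on a private socket.
    $BDEV_SVC -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # Poll until the app responds to RPCs before the test proper begins.
    until $RPC rpc_get_methods > /dev/null 2>&1; do sleep 0.1; done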
00:16:00.028 [2024-07-24 10:41:26.693364] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.286 [2024-07-24 10:41:26.835157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.286 [2024-07-24 10:41:26.959307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.544 [2024-07-24 10:41:27.040508] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.111 10:41:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:01.111 10:41:27 -- common/autotest_common.sh@852 -- # return 0 00:16:01.111 10:41:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:01.373 [2024-07-24 10:41:27.913486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.373 [2024-07-24 10:41:27.913980] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.373 [2024-07-24 10:41:27.914123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.373 [2024-07-24 10:41:27.914195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.373 [2024-07-24 10:41:27.914499] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.373 [2024-07-24 10:41:27.914700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.373 10:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.640 10:41:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.640 "name": "Existed_Raid", 00:16:01.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.640 "strip_size_kb": 64, 00:16:01.640 "state": "configuring", 00:16:01.640 "raid_level": "concat", 00:16:01.640 "superblock": false, 00:16:01.640 "num_base_bdevs": 3, 00:16:01.640 "num_base_bdevs_discovered": 0, 00:16:01.640 "num_base_bdevs_operational": 3, 00:16:01.640 "base_bdevs_list": [ 00:16:01.640 { 00:16:01.640 "name": "BaseBdev1", 00:16:01.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.640 "is_configured": false, 00:16:01.640 "data_offset": 0, 00:16:01.640 "data_size": 0 00:16:01.640 }, 00:16:01.640 { 00:16:01.640 "name": "BaseBdev2", 00:16:01.640 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:01.640 "is_configured": false, 00:16:01.640 "data_offset": 0, 00:16:01.640 "data_size": 0 00:16:01.640 }, 00:16:01.640 { 00:16:01.640 "name": "BaseBdev3", 00:16:01.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.640 "is_configured": false, 00:16:01.640 "data_offset": 0, 00:16:01.640 "data_size": 0 00:16:01.640 } 00:16:01.640 ] 00:16:01.640 }' 00:16:01.640 10:41:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.640 10:41:28 -- common/autotest_common.sh@10 -- # set +x 00:16:02.207 10:41:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.466 [2024-07-24 10:41:29.065555] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.466 [2024-07-24 10:41:29.065998] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:02.466 10:41:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:02.723 [2024-07-24 10:41:29.337653] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.723 [2024-07-24 10:41:29.338125] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.723 [2024-07-24 10:41:29.338249] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.723 [2024-07-24 10:41:29.338325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.723 [2024-07-24 10:41:29.338450] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.723 [2024-07-24 10:41:29.338539] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.723 10:41:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.981 [2024-07-24 10:41:29.585725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.981 BaseBdev1 00:16:02.981 10:41:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:02.981 10:41:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:02.981 10:41:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:02.981 10:41:29 -- common/autotest_common.sh@889 -- # local i 00:16:02.981 10:41:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:02.981 10:41:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:02.981 10:41:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.239 10:41:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.497 [ 00:16:03.497 { 00:16:03.497 "name": "BaseBdev1", 00:16:03.497 "aliases": [ 00:16:03.497 "51b23728-53fb-4dad-bba7-1e34beadd6b9" 00:16:03.497 ], 00:16:03.497 "product_name": "Malloc disk", 00:16:03.497 "block_size": 512, 00:16:03.497 "num_blocks": 65536, 00:16:03.497 "uuid": "51b23728-53fb-4dad-bba7-1e34beadd6b9", 00:16:03.497 "assigned_rate_limits": { 00:16:03.497 "rw_ios_per_sec": 0, 00:16:03.497 "rw_mbytes_per_sec": 0, 00:16:03.497 "r_mbytes_per_sec": 0, 00:16:03.497 "w_mbytes_per_sec": 
0 00:16:03.497 }, 00:16:03.497 "claimed": true, 00:16:03.497 "claim_type": "exclusive_write", 00:16:03.497 "zoned": false, 00:16:03.497 "supported_io_types": { 00:16:03.497 "read": true, 00:16:03.497 "write": true, 00:16:03.497 "unmap": true, 00:16:03.497 "write_zeroes": true, 00:16:03.497 "flush": true, 00:16:03.497 "reset": true, 00:16:03.497 "compare": false, 00:16:03.497 "compare_and_write": false, 00:16:03.497 "abort": true, 00:16:03.497 "nvme_admin": false, 00:16:03.497 "nvme_io": false 00:16:03.497 }, 00:16:03.497 "memory_domains": [ 00:16:03.497 { 00:16:03.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.497 "dma_device_type": 2 00:16:03.497 } 00:16:03.497 ], 00:16:03.497 "driver_specific": {} 00:16:03.497 } 00:16:03.497 ] 00:16:03.497 10:41:30 -- common/autotest_common.sh@895 -- # return 0 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.497 10:41:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.755 10:41:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.755 "name": "Existed_Raid", 00:16:03.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.755 "strip_size_kb": 64, 00:16:03.755 "state": "configuring", 00:16:03.755 "raid_level": "concat", 00:16:03.755 "superblock": false, 00:16:03.755 "num_base_bdevs": 3, 00:16:03.755 "num_base_bdevs_discovered": 1, 00:16:03.755 "num_base_bdevs_operational": 3, 00:16:03.755 "base_bdevs_list": [ 00:16:03.755 { 00:16:03.755 "name": "BaseBdev1", 00:16:03.755 "uuid": "51b23728-53fb-4dad-bba7-1e34beadd6b9", 00:16:03.755 "is_configured": true, 00:16:03.755 "data_offset": 0, 00:16:03.755 "data_size": 65536 00:16:03.755 }, 00:16:03.755 { 00:16:03.755 "name": "BaseBdev2", 00:16:03.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.755 "is_configured": false, 00:16:03.755 "data_offset": 0, 00:16:03.755 "data_size": 0 00:16:03.755 }, 00:16:03.755 { 00:16:03.755 "name": "BaseBdev3", 00:16:03.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.755 "is_configured": false, 00:16:03.755 "data_offset": 0, 00:16:03.755 "data_size": 0 00:16:03.755 } 00:16:03.755 ] 00:16:03.755 }' 00:16:03.755 10:41:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.755 10:41:30 -- common/autotest_common.sh@10 -- # set +x 00:16:04.320 10:41:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:04.578 [2024-07-24 10:41:31.158260] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.578 [2024-07-24 10:41:31.158637] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:16:04.578 10:41:31 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:04.578 10:41:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:04.836 [2024-07-24 10:41:31.386462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.836 [2024-07-24 10:41:31.389274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.836 [2024-07-24 10:41:31.389486] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.836 [2024-07-24 10:41:31.389658] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.836 [2024-07-24 10:41:31.389736] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.836 10:41:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.094 10:41:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.094 "name": "Existed_Raid", 00:16:05.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.094 "strip_size_kb": 64, 00:16:05.094 "state": "configuring", 00:16:05.094 "raid_level": "concat", 00:16:05.094 "superblock": false, 00:16:05.094 "num_base_bdevs": 3, 00:16:05.094 "num_base_bdevs_discovered": 1, 00:16:05.094 "num_base_bdevs_operational": 3, 00:16:05.094 "base_bdevs_list": [ 00:16:05.094 { 00:16:05.094 "name": "BaseBdev1", 00:16:05.094 "uuid": "51b23728-53fb-4dad-bba7-1e34beadd6b9", 00:16:05.094 "is_configured": true, 00:16:05.094 "data_offset": 0, 00:16:05.094 "data_size": 65536 00:16:05.094 }, 00:16:05.094 { 00:16:05.094 "name": "BaseBdev2", 00:16:05.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.094 "is_configured": false, 00:16:05.094 "data_offset": 0, 00:16:05.094 "data_size": 0 00:16:05.094 }, 00:16:05.094 { 00:16:05.094 "name": "BaseBdev3", 00:16:05.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.094 "is_configured": false, 00:16:05.094 "data_offset": 0, 00:16:05.094 "data_size": 0 00:16:05.094 } 00:16:05.094 ] 00:16:05.094 }' 00:16:05.094 10:41:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.094 10:41:31 -- common/autotest_common.sh@10 -- # set +x 00:16:05.659 10:41:32 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.223 [2024-07-24 10:41:32.608944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.223 BaseBdev2 00:16:06.223 10:41:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:06.223 10:41:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:06.223 10:41:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:06.223 10:41:32 -- common/autotest_common.sh@889 -- # local i 00:16:06.223 10:41:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:06.223 10:41:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:06.223 10:41:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.480 10:41:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.480 [ 00:16:06.480 { 00:16:06.480 "name": "BaseBdev2", 00:16:06.480 "aliases": [ 00:16:06.480 "45e48024-db2b-458e-96fc-6c430148ab15" 00:16:06.480 ], 00:16:06.481 "product_name": "Malloc disk", 00:16:06.481 "block_size": 512, 00:16:06.481 "num_blocks": 65536, 00:16:06.481 "uuid": "45e48024-db2b-458e-96fc-6c430148ab15", 00:16:06.481 "assigned_rate_limits": { 00:16:06.481 "rw_ios_per_sec": 0, 00:16:06.481 "rw_mbytes_per_sec": 0, 00:16:06.481 "r_mbytes_per_sec": 0, 00:16:06.481 "w_mbytes_per_sec": 0 00:16:06.481 }, 00:16:06.481 "claimed": true, 00:16:06.481 "claim_type": "exclusive_write", 00:16:06.481 "zoned": false, 00:16:06.481 "supported_io_types": { 00:16:06.481 "read": true, 00:16:06.481 "write": true, 00:16:06.481 "unmap": true, 00:16:06.481 "write_zeroes": true, 00:16:06.481 "flush": true, 00:16:06.481 "reset": true, 00:16:06.481 "compare": false, 00:16:06.481 "compare_and_write": false, 00:16:06.481 "abort": true, 00:16:06.481 "nvme_admin": false, 00:16:06.481 "nvme_io": false 00:16:06.481 }, 00:16:06.481 "memory_domains": [ 00:16:06.481 { 00:16:06.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.481 "dma_device_type": 2 00:16:06.481 } 00:16:06.481 ], 00:16:06.481 "driver_specific": {} 00:16:06.481 } 00:16:06.481 ] 00:16:06.481 10:41:33 -- common/autotest_common.sh@895 -- # return 0 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.481 10:41:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
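Each bdev_malloc_create above adds one base bdev of the concat set; the raid module claims it as soon as it is examined, and the JSON dump that follows shows num_base_bdevs_discovered rising from 1 to 2 while the array stays in the "configuring" state. A sketch of the step just traced, with RPC again shorthand for the rpc.py invocation used throughout this log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Create the second malloc bdev (32 MiB as 65536 x 512-byte blocks); Existed_Raid claims it on examine.
    $RPC bdev_malloc_create 32 512 -b BaseBdev2
    $RPC bdev_wait_for_examine
    # The new bdev now reports "claimed": true with claim_type "exclusive_write".
    $RPC bdev_get_bdevs -b BaseBdev2 -t 2000
    # The raid itself remains "configuring" until BaseBdev3 appears as well.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'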
00:16:07.044 10:41:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.044 "name": "Existed_Raid", 00:16:07.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.044 "strip_size_kb": 64, 00:16:07.044 "state": "configuring", 00:16:07.044 "raid_level": "concat", 00:16:07.044 "superblock": false, 00:16:07.044 "num_base_bdevs": 3, 00:16:07.044 "num_base_bdevs_discovered": 2, 00:16:07.044 "num_base_bdevs_operational": 3, 00:16:07.044 "base_bdevs_list": [ 00:16:07.044 { 00:16:07.044 "name": "BaseBdev1", 00:16:07.044 "uuid": "51b23728-53fb-4dad-bba7-1e34beadd6b9", 00:16:07.044 "is_configured": true, 00:16:07.045 "data_offset": 0, 00:16:07.045 "data_size": 65536 00:16:07.045 }, 00:16:07.045 { 00:16:07.045 "name": "BaseBdev2", 00:16:07.045 "uuid": "45e48024-db2b-458e-96fc-6c430148ab15", 00:16:07.045 "is_configured": true, 00:16:07.045 "data_offset": 0, 00:16:07.045 "data_size": 65536 00:16:07.045 }, 00:16:07.045 { 00:16:07.045 "name": "BaseBdev3", 00:16:07.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.045 "is_configured": false, 00:16:07.045 "data_offset": 0, 00:16:07.045 "data_size": 0 00:16:07.045 } 00:16:07.045 ] 00:16:07.045 }' 00:16:07.045 10:41:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.045 10:41:33 -- common/autotest_common.sh@10 -- # set +x 00:16:07.609 10:41:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:07.867 [2024-07-24 10:41:34.353966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.867 [2024-07-24 10:41:34.354357] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:07.867 [2024-07-24 10:41:34.354411] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:07.867 [2024-07-24 10:41:34.354747] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:07.867 [2024-07-24 10:41:34.355336] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:07.867 [2024-07-24 10:41:34.355470] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:07.867 [2024-07-24 10:41:34.355940] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.867 BaseBdev3 00:16:07.867 10:41:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:07.867 10:41:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:07.867 10:41:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:07.867 10:41:34 -- common/autotest_common.sh@889 -- # local i 00:16:07.867 10:41:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:07.867 10:41:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:07.867 10:41:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.124 10:41:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.382 [ 00:16:08.382 { 00:16:08.382 "name": "BaseBdev3", 00:16:08.382 "aliases": [ 00:16:08.382 "553d2ffc-c658-4cf4-b30a-7cf44b60f0d4" 00:16:08.382 ], 00:16:08.382 "product_name": "Malloc disk", 00:16:08.382 "block_size": 512, 00:16:08.382 "num_blocks": 65536, 00:16:08.382 "uuid": "553d2ffc-c658-4cf4-b30a-7cf44b60f0d4", 00:16:08.382 "assigned_rate_limits": { 00:16:08.382 
"rw_ios_per_sec": 0, 00:16:08.382 "rw_mbytes_per_sec": 0, 00:16:08.382 "r_mbytes_per_sec": 0, 00:16:08.382 "w_mbytes_per_sec": 0 00:16:08.382 }, 00:16:08.382 "claimed": true, 00:16:08.382 "claim_type": "exclusive_write", 00:16:08.382 "zoned": false, 00:16:08.382 "supported_io_types": { 00:16:08.382 "read": true, 00:16:08.382 "write": true, 00:16:08.382 "unmap": true, 00:16:08.382 "write_zeroes": true, 00:16:08.383 "flush": true, 00:16:08.383 "reset": true, 00:16:08.383 "compare": false, 00:16:08.383 "compare_and_write": false, 00:16:08.383 "abort": true, 00:16:08.383 "nvme_admin": false, 00:16:08.383 "nvme_io": false 00:16:08.383 }, 00:16:08.383 "memory_domains": [ 00:16:08.383 { 00:16:08.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.383 "dma_device_type": 2 00:16:08.383 } 00:16:08.383 ], 00:16:08.383 "driver_specific": {} 00:16:08.383 } 00:16:08.383 ] 00:16:08.383 10:41:34 -- common/autotest_common.sh@895 -- # return 0 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.383 10:41:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.640 10:41:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.640 "name": "Existed_Raid", 00:16:08.640 "uuid": "3a7555f1-d3d2-46f2-95d1-4966e8b75a04", 00:16:08.640 "strip_size_kb": 64, 00:16:08.640 "state": "online", 00:16:08.640 "raid_level": "concat", 00:16:08.640 "superblock": false, 00:16:08.640 "num_base_bdevs": 3, 00:16:08.640 "num_base_bdevs_discovered": 3, 00:16:08.640 "num_base_bdevs_operational": 3, 00:16:08.640 "base_bdevs_list": [ 00:16:08.640 { 00:16:08.640 "name": "BaseBdev1", 00:16:08.640 "uuid": "51b23728-53fb-4dad-bba7-1e34beadd6b9", 00:16:08.640 "is_configured": true, 00:16:08.640 "data_offset": 0, 00:16:08.640 "data_size": 65536 00:16:08.640 }, 00:16:08.640 { 00:16:08.640 "name": "BaseBdev2", 00:16:08.640 "uuid": "45e48024-db2b-458e-96fc-6c430148ab15", 00:16:08.640 "is_configured": true, 00:16:08.640 "data_offset": 0, 00:16:08.640 "data_size": 65536 00:16:08.640 }, 00:16:08.640 { 00:16:08.640 "name": "BaseBdev3", 00:16:08.640 "uuid": "553d2ffc-c658-4cf4-b30a-7cf44b60f0d4", 00:16:08.640 "is_configured": true, 00:16:08.640 "data_offset": 0, 00:16:08.640 "data_size": 65536 00:16:08.640 } 00:16:08.640 ] 00:16:08.640 }' 00:16:08.640 10:41:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.640 10:41:35 -- common/autotest_common.sh@10 -- # set +x 00:16:09.205 10:41:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:09.463 [2024-07-24 10:41:36.038744] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.463 [2024-07-24 10:41:36.039161] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.463 [2024-07-24 10:41:36.039425] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.463 10:41:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.721 10:41:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.721 "name": "Existed_Raid", 00:16:09.721 "uuid": "3a7555f1-d3d2-46f2-95d1-4966e8b75a04", 00:16:09.721 "strip_size_kb": 64, 00:16:09.721 "state": "offline", 00:16:09.721 "raid_level": "concat", 00:16:09.721 "superblock": false, 00:16:09.721 "num_base_bdevs": 3, 00:16:09.721 "num_base_bdevs_discovered": 2, 00:16:09.721 "num_base_bdevs_operational": 2, 00:16:09.721 "base_bdevs_list": [ 00:16:09.721 { 00:16:09.721 "name": null, 00:16:09.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.721 "is_configured": false, 00:16:09.721 "data_offset": 0, 00:16:09.721 "data_size": 65536 00:16:09.721 }, 00:16:09.721 { 00:16:09.721 "name": "BaseBdev2", 00:16:09.721 "uuid": "45e48024-db2b-458e-96fc-6c430148ab15", 00:16:09.721 "is_configured": true, 00:16:09.721 "data_offset": 0, 00:16:09.721 "data_size": 65536 00:16:09.721 }, 00:16:09.721 { 00:16:09.721 "name": "BaseBdev3", 00:16:09.721 "uuid": "553d2ffc-c658-4cf4-b30a-7cf44b60f0d4", 00:16:09.721 "is_configured": true, 00:16:09.721 "data_offset": 0, 00:16:09.721 "data_size": 65536 00:16:09.721 } 00:16:09.721 ] 00:16:09.721 }' 00:16:09.721 10:41:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.721 10:41:36 -- common/autotest_common.sh@10 -- # set +x 00:16:10.654 10:41:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:10.654 10:41:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:10.655 10:41:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.655 10:41:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:10.655 10:41:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:10.655 10:41:37 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.655 10:41:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:10.912 [2024-07-24 10:41:37.550813] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.912 10:41:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:10.912 10:41:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:10.912 10:41:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.912 10:41:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:11.477 10:41:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:11.477 10:41:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:11.477 10:41:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:11.477 [2024-07-24 10:41:38.072951] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.477 [2024-07-24 10:41:38.073324] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:11.477 10:41:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:11.477 10:41:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:11.477 10:41:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:11.477 10:41:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.735 10:41:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:11.735 10:41:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:11.735 10:41:38 -- bdev/bdev_raid.sh@287 -- # killprocess 126807 00:16:11.735 10:41:38 -- common/autotest_common.sh@926 -- # '[' -z 126807 ']' 00:16:11.735 10:41:38 -- common/autotest_common.sh@930 -- # kill -0 126807 00:16:11.735 10:41:38 -- common/autotest_common.sh@931 -- # uname 00:16:11.735 10:41:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:11.735 10:41:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126807 00:16:11.735 10:41:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:11.735 10:41:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:11.735 10:41:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126807' 00:16:11.735 killing process with pid 126807 00:16:11.735 10:41:38 -- common/autotest_common.sh@945 -- # kill 126807 00:16:11.735 10:41:38 -- common/autotest_common.sh@950 -- # wait 126807 00:16:11.735 [2024-07-24 10:41:38.411223] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.735 [2024-07-24 10:41:38.411348] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:12.300 00:16:12.300 real 0m12.121s 00:16:12.300 user 0m22.146s 00:16:12.300 sys 0m1.569s 00:16:12.300 10:41:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.300 10:41:38 -- common/autotest_common.sh@10 -- # set +x 00:16:12.300 ************************************ 00:16:12.300 END TEST raid_state_function_test 00:16:12.300 ************************************ 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:12.300 10:41:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
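raid_state_function_test ends by exercising the teardown path traced above: deleting a base bdev of a concat array, which has no redundancy, drops Existed_Raid from "online" to "offline", and removing the remaining bases lets the raid bdev be freed before the test app is killed. A sketch of that teardown, using the same RPC shorthand and bdev names as the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Concat provides no redundancy, so losing any base bdev takes the array offline.
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> "offline"
    # Removing the remaining base bdevs lets the offline raid bdev be cleaned up completely.
    $RPC bdev_malloc_delete BaseBdev2
    $RPC bdev_malloc_delete BaseBdev3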
00:16:12.300 10:41:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.300 10:41:38 -- common/autotest_common.sh@10 -- # set +x 00:16:12.300 ************************************ 00:16:12.300 START TEST raid_state_function_test_sb 00:16:12.300 ************************************ 00:16:12.300 10:41:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:12.300 10:41:38 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:12.301 10:41:38 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:12.301 10:41:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:12.301 10:41:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:12.301 10:41:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=127184 00:16:12.301 10:41:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127184' 00:16:12.301 Process raid pid: 127184 00:16:12.301 10:41:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:12.301 10:41:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127184 /var/tmp/spdk-raid.sock 00:16:12.301 10:41:38 -- common/autotest_common.sh@819 -- # '[' -z 127184 ']' 00:16:12.301 10:41:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:12.301 10:41:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:12.301 10:41:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:12.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:12.301 10:41:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:12.301 10:41:38 -- common/autotest_common.sh@10 -- # set +x 00:16:12.301 [2024-07-24 10:41:38.877859] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:16:12.301 [2024-07-24 10:41:38.878292] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.558 [2024-07-24 10:41:39.026269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.558 [2024-07-24 10:41:39.150619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:12.558 [2024-07-24 10:41:39.229070] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.493 10:41:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:13.493 10:41:39 -- common/autotest_common.sh@852 -- # return 0 00:16:13.493 10:41:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:13.493 [2024-07-24 10:41:40.086339] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.493 [2024-07-24 10:41:40.086847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.493 [2024-07-24 10:41:40.086985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.493 [2024-07-24 10:41:40.087056] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.493 [2024-07-24 10:41:40.087211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:13.493 [2024-07-24 10:41:40.087390] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.493 10:41:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.751 10:41:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.751 "name": "Existed_Raid", 00:16:13.751 "uuid": "25b995f6-56d2-4e39-98a0-abdc0ce48bec", 00:16:13.751 "strip_size_kb": 64, 00:16:13.751 "state": "configuring", 00:16:13.751 "raid_level": "concat", 00:16:13.751 "superblock": true, 00:16:13.751 "num_base_bdevs": 3, 00:16:13.751 "num_base_bdevs_discovered": 0, 00:16:13.751 "num_base_bdevs_operational": 3, 00:16:13.751 "base_bdevs_list": [ 00:16:13.751 { 00:16:13.751 "name": "BaseBdev1", 00:16:13.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.751 "is_configured": false, 00:16:13.751 "data_offset": 0, 00:16:13.751 "data_size": 0 00:16:13.751 }, 00:16:13.751 { 00:16:13.751 "name": "BaseBdev2", 00:16:13.751 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:13.751 "is_configured": false, 00:16:13.751 "data_offset": 0, 00:16:13.751 "data_size": 0 00:16:13.751 }, 00:16:13.751 { 00:16:13.751 "name": "BaseBdev3", 00:16:13.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.751 "is_configured": false, 00:16:13.751 "data_offset": 0, 00:16:13.751 "data_size": 0 00:16:13.751 } 00:16:13.751 ] 00:16:13.751 }' 00:16:13.751 10:41:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.751 10:41:40 -- common/autotest_common.sh@10 -- # set +x 00:16:14.684 10:41:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:14.942 [2024-07-24 10:41:41.374420] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.942 [2024-07-24 10:41:41.374790] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:14.942 10:41:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:14.942 [2024-07-24 10:41:41.610596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.942 [2024-07-24 10:41:41.611039] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.942 [2024-07-24 10:41:41.611179] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.942 [2024-07-24 10:41:41.611255] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.942 [2024-07-24 10:41:41.611405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.942 [2024-07-24 10:41:41.611575] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.218 10:41:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.218 [2024-07-24 10:41:41.881798] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.218 BaseBdev1 00:16:15.218 10:41:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:15.218 10:41:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:15.218 10:41:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:15.218 10:41:41 -- common/autotest_common.sh@889 -- # local i 00:16:15.475 10:41:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:15.475 10:41:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:15.475 10:41:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:15.475 10:41:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.733 [ 00:16:15.733 { 00:16:15.733 "name": "BaseBdev1", 00:16:15.733 "aliases": [ 00:16:15.733 "647dc844-5fae-4811-bffa-6d68ad4a159d" 00:16:15.733 ], 00:16:15.733 "product_name": "Malloc disk", 00:16:15.733 "block_size": 512, 00:16:15.733 "num_blocks": 65536, 00:16:15.733 "uuid": "647dc844-5fae-4811-bffa-6d68ad4a159d", 00:16:15.733 "assigned_rate_limits": { 00:16:15.733 "rw_ios_per_sec": 0, 00:16:15.733 "rw_mbytes_per_sec": 0, 00:16:15.733 "r_mbytes_per_sec": 0, 00:16:15.733 
"w_mbytes_per_sec": 0 00:16:15.733 }, 00:16:15.733 "claimed": true, 00:16:15.733 "claim_type": "exclusive_write", 00:16:15.733 "zoned": false, 00:16:15.733 "supported_io_types": { 00:16:15.733 "read": true, 00:16:15.733 "write": true, 00:16:15.733 "unmap": true, 00:16:15.733 "write_zeroes": true, 00:16:15.733 "flush": true, 00:16:15.733 "reset": true, 00:16:15.733 "compare": false, 00:16:15.733 "compare_and_write": false, 00:16:15.733 "abort": true, 00:16:15.733 "nvme_admin": false, 00:16:15.733 "nvme_io": false 00:16:15.733 }, 00:16:15.733 "memory_domains": [ 00:16:15.733 { 00:16:15.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.733 "dma_device_type": 2 00:16:15.733 } 00:16:15.733 ], 00:16:15.733 "driver_specific": {} 00:16:15.733 } 00:16:15.733 ] 00:16:15.733 10:41:42 -- common/autotest_common.sh@895 -- # return 0 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.733 10:41:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.298 10:41:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.298 "name": "Existed_Raid", 00:16:16.298 "uuid": "c6ee4c98-d982-4f71-ba5d-31f5d1193afa", 00:16:16.298 "strip_size_kb": 64, 00:16:16.298 "state": "configuring", 00:16:16.298 "raid_level": "concat", 00:16:16.298 "superblock": true, 00:16:16.298 "num_base_bdevs": 3, 00:16:16.298 "num_base_bdevs_discovered": 1, 00:16:16.298 "num_base_bdevs_operational": 3, 00:16:16.298 "base_bdevs_list": [ 00:16:16.298 { 00:16:16.298 "name": "BaseBdev1", 00:16:16.298 "uuid": "647dc844-5fae-4811-bffa-6d68ad4a159d", 00:16:16.298 "is_configured": true, 00:16:16.298 "data_offset": 2048, 00:16:16.298 "data_size": 63488 00:16:16.298 }, 00:16:16.298 { 00:16:16.298 "name": "BaseBdev2", 00:16:16.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.298 "is_configured": false, 00:16:16.298 "data_offset": 0, 00:16:16.298 "data_size": 0 00:16:16.298 }, 00:16:16.298 { 00:16:16.298 "name": "BaseBdev3", 00:16:16.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.298 "is_configured": false, 00:16:16.298 "data_offset": 0, 00:16:16.298 "data_size": 0 00:16:16.298 } 00:16:16.298 ] 00:16:16.298 }' 00:16:16.298 10:41:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.298 10:41:42 -- common/autotest_common.sh@10 -- # set +x 00:16:16.869 10:41:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:16.869 [2024-07-24 10:41:43.502355] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:16.869 [2024-07-24 10:41:43.502776] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:16:16.869 10:41:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:16.869 10:41:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:17.434 10:41:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:17.434 BaseBdev1 00:16:17.690 10:41:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:17.690 10:41:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:17.690 10:41:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:17.690 10:41:44 -- common/autotest_common.sh@889 -- # local i 00:16:17.690 10:41:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:17.690 10:41:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:17.690 10:41:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:17.690 10:41:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.947 [ 00:16:17.947 { 00:16:17.947 "name": "BaseBdev1", 00:16:17.947 "aliases": [ 00:16:17.947 "1a601b17-36b3-4a5f-b392-acb4cc6b3b34" 00:16:17.947 ], 00:16:17.947 "product_name": "Malloc disk", 00:16:17.947 "block_size": 512, 00:16:17.947 "num_blocks": 65536, 00:16:17.947 "uuid": "1a601b17-36b3-4a5f-b392-acb4cc6b3b34", 00:16:17.947 "assigned_rate_limits": { 00:16:17.947 "rw_ios_per_sec": 0, 00:16:17.947 "rw_mbytes_per_sec": 0, 00:16:17.947 "r_mbytes_per_sec": 0, 00:16:17.947 "w_mbytes_per_sec": 0 00:16:17.947 }, 00:16:17.947 "claimed": false, 00:16:17.947 "zoned": false, 00:16:17.947 "supported_io_types": { 00:16:17.947 "read": true, 00:16:17.947 "write": true, 00:16:17.947 "unmap": true, 00:16:17.947 "write_zeroes": true, 00:16:17.947 "flush": true, 00:16:17.947 "reset": true, 00:16:17.947 "compare": false, 00:16:17.947 "compare_and_write": false, 00:16:17.947 "abort": true, 00:16:17.947 "nvme_admin": false, 00:16:17.947 "nvme_io": false 00:16:17.947 }, 00:16:17.947 "memory_domains": [ 00:16:17.947 { 00:16:17.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.947 "dma_device_type": 2 00:16:17.947 } 00:16:17.947 ], 00:16:17.947 "driver_specific": {} 00:16:17.947 } 00:16:17.947 ] 00:16:17.947 10:41:44 -- common/autotest_common.sh@895 -- # return 0 00:16:17.947 10:41:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:18.204 [2024-07-24 10:41:44.827674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.204 [2024-07-24 10:41:44.830498] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.204 [2024-07-24 10:41:44.830714] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.204 [2024-07-24 10:41:44.830845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.204 [2024-07-24 10:41:44.831008] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:18.204 
10:41:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.204 10:41:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.461 10:41:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.461 "name": "Existed_Raid", 00:16:18.462 "uuid": "0b5e44f3-35d1-4903-bd97-7abd606cdff0", 00:16:18.462 "strip_size_kb": 64, 00:16:18.462 "state": "configuring", 00:16:18.462 "raid_level": "concat", 00:16:18.462 "superblock": true, 00:16:18.462 "num_base_bdevs": 3, 00:16:18.462 "num_base_bdevs_discovered": 1, 00:16:18.462 "num_base_bdevs_operational": 3, 00:16:18.462 "base_bdevs_list": [ 00:16:18.462 { 00:16:18.462 "name": "BaseBdev1", 00:16:18.462 "uuid": "1a601b17-36b3-4a5f-b392-acb4cc6b3b34", 00:16:18.462 "is_configured": true, 00:16:18.462 "data_offset": 2048, 00:16:18.462 "data_size": 63488 00:16:18.462 }, 00:16:18.462 { 00:16:18.462 "name": "BaseBdev2", 00:16:18.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.462 "is_configured": false, 00:16:18.462 "data_offset": 0, 00:16:18.462 "data_size": 0 00:16:18.462 }, 00:16:18.462 { 00:16:18.462 "name": "BaseBdev3", 00:16:18.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.462 "is_configured": false, 00:16:18.462 "data_offset": 0, 00:16:18.462 "data_size": 0 00:16:18.462 } 00:16:18.462 ] 00:16:18.462 }' 00:16:18.462 10:41:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.462 10:41:45 -- common/autotest_common.sh@10 -- # set +x 00:16:19.394 10:41:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.394 [2024-07-24 10:41:46.041581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.394 BaseBdev2 00:16:19.394 10:41:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:19.394 10:41:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:19.394 10:41:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:19.394 10:41:46 -- common/autotest_common.sh@889 -- # local i 00:16:19.394 10:41:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:19.394 10:41:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:19.394 10:41:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.651 10:41:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:19.929 [ 00:16:19.929 { 00:16:19.929 "name": "BaseBdev2", 00:16:19.929 "aliases": [ 00:16:19.929 
"7427aaa0-0f15-436c-b707-7fd6db4651a0" 00:16:19.929 ], 00:16:19.929 "product_name": "Malloc disk", 00:16:19.929 "block_size": 512, 00:16:19.929 "num_blocks": 65536, 00:16:19.929 "uuid": "7427aaa0-0f15-436c-b707-7fd6db4651a0", 00:16:19.929 "assigned_rate_limits": { 00:16:19.929 "rw_ios_per_sec": 0, 00:16:19.929 "rw_mbytes_per_sec": 0, 00:16:19.929 "r_mbytes_per_sec": 0, 00:16:19.929 "w_mbytes_per_sec": 0 00:16:19.929 }, 00:16:19.929 "claimed": true, 00:16:19.929 "claim_type": "exclusive_write", 00:16:19.929 "zoned": false, 00:16:19.929 "supported_io_types": { 00:16:19.929 "read": true, 00:16:19.929 "write": true, 00:16:19.929 "unmap": true, 00:16:19.929 "write_zeroes": true, 00:16:19.929 "flush": true, 00:16:19.929 "reset": true, 00:16:19.929 "compare": false, 00:16:19.929 "compare_and_write": false, 00:16:19.929 "abort": true, 00:16:19.929 "nvme_admin": false, 00:16:19.929 "nvme_io": false 00:16:19.929 }, 00:16:19.929 "memory_domains": [ 00:16:19.929 { 00:16:19.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.929 "dma_device_type": 2 00:16:19.929 } 00:16:19.929 ], 00:16:19.929 "driver_specific": {} 00:16:19.929 } 00:16:19.929 ] 00:16:19.929 10:41:46 -- common/autotest_common.sh@895 -- # return 0 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.929 10:41:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.198 10:41:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.198 "name": "Existed_Raid", 00:16:20.198 "uuid": "0b5e44f3-35d1-4903-bd97-7abd606cdff0", 00:16:20.198 "strip_size_kb": 64, 00:16:20.198 "state": "configuring", 00:16:20.198 "raid_level": "concat", 00:16:20.198 "superblock": true, 00:16:20.198 "num_base_bdevs": 3, 00:16:20.198 "num_base_bdevs_discovered": 2, 00:16:20.198 "num_base_bdevs_operational": 3, 00:16:20.198 "base_bdevs_list": [ 00:16:20.198 { 00:16:20.198 "name": "BaseBdev1", 00:16:20.198 "uuid": "1a601b17-36b3-4a5f-b392-acb4cc6b3b34", 00:16:20.198 "is_configured": true, 00:16:20.198 "data_offset": 2048, 00:16:20.198 "data_size": 63488 00:16:20.198 }, 00:16:20.198 { 00:16:20.198 "name": "BaseBdev2", 00:16:20.198 "uuid": "7427aaa0-0f15-436c-b707-7fd6db4651a0", 00:16:20.198 "is_configured": true, 00:16:20.198 "data_offset": 2048, 00:16:20.198 "data_size": 63488 00:16:20.198 }, 00:16:20.198 { 00:16:20.198 "name": "BaseBdev3", 00:16:20.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.198 "is_configured": false, 00:16:20.198 "data_offset": 0, 00:16:20.198 "data_size": 0 
00:16:20.198 } 00:16:20.198 ] 00:16:20.198 }' 00:16:20.198 10:41:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.198 10:41:46 -- common/autotest_common.sh@10 -- # set +x 00:16:20.764 10:41:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.022 [2024-07-24 10:41:47.670992] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.022 [2024-07-24 10:41:47.671713] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:21.022 [2024-07-24 10:41:47.671857] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:21.022 [2024-07-24 10:41:47.672134] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:21.022 [2024-07-24 10:41:47.672735] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:21.022 [2024-07-24 10:41:47.672865] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:21.022 BaseBdev3 00:16:21.022 [2024-07-24 10:41:47.673188] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.022 10:41:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:21.022 10:41:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:21.022 10:41:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:21.022 10:41:47 -- common/autotest_common.sh@889 -- # local i 00:16:21.022 10:41:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:21.022 10:41:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:21.022 10:41:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.280 10:41:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.538 [ 00:16:21.538 { 00:16:21.538 "name": "BaseBdev3", 00:16:21.538 "aliases": [ 00:16:21.538 "18748bfc-6b33-4dac-9a93-82ebb25c284e" 00:16:21.538 ], 00:16:21.538 "product_name": "Malloc disk", 00:16:21.538 "block_size": 512, 00:16:21.538 "num_blocks": 65536, 00:16:21.538 "uuid": "18748bfc-6b33-4dac-9a93-82ebb25c284e", 00:16:21.538 "assigned_rate_limits": { 00:16:21.538 "rw_ios_per_sec": 0, 00:16:21.538 "rw_mbytes_per_sec": 0, 00:16:21.538 "r_mbytes_per_sec": 0, 00:16:21.538 "w_mbytes_per_sec": 0 00:16:21.538 }, 00:16:21.538 "claimed": true, 00:16:21.538 "claim_type": "exclusive_write", 00:16:21.538 "zoned": false, 00:16:21.538 "supported_io_types": { 00:16:21.538 "read": true, 00:16:21.538 "write": true, 00:16:21.538 "unmap": true, 00:16:21.538 "write_zeroes": true, 00:16:21.538 "flush": true, 00:16:21.538 "reset": true, 00:16:21.538 "compare": false, 00:16:21.538 "compare_and_write": false, 00:16:21.538 "abort": true, 00:16:21.538 "nvme_admin": false, 00:16:21.538 "nvme_io": false 00:16:21.538 }, 00:16:21.538 "memory_domains": [ 00:16:21.538 { 00:16:21.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.538 "dma_device_type": 2 00:16:21.538 } 00:16:21.538 ], 00:16:21.538 "driver_specific": {} 00:16:21.538 } 00:16:21.538 ] 00:16:21.796 10:41:48 -- common/autotest_common.sh@895 -- # return 0 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:21.796 10:41:48 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.796 10:41:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.055 10:41:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.055 "name": "Existed_Raid", 00:16:22.055 "uuid": "0b5e44f3-35d1-4903-bd97-7abd606cdff0", 00:16:22.055 "strip_size_kb": 64, 00:16:22.055 "state": "online", 00:16:22.055 "raid_level": "concat", 00:16:22.055 "superblock": true, 00:16:22.055 "num_base_bdevs": 3, 00:16:22.055 "num_base_bdevs_discovered": 3, 00:16:22.055 "num_base_bdevs_operational": 3, 00:16:22.055 "base_bdevs_list": [ 00:16:22.055 { 00:16:22.055 "name": "BaseBdev1", 00:16:22.055 "uuid": "1a601b17-36b3-4a5f-b392-acb4cc6b3b34", 00:16:22.055 "is_configured": true, 00:16:22.055 "data_offset": 2048, 00:16:22.055 "data_size": 63488 00:16:22.055 }, 00:16:22.055 { 00:16:22.055 "name": "BaseBdev2", 00:16:22.055 "uuid": "7427aaa0-0f15-436c-b707-7fd6db4651a0", 00:16:22.055 "is_configured": true, 00:16:22.055 "data_offset": 2048, 00:16:22.055 "data_size": 63488 00:16:22.055 }, 00:16:22.055 { 00:16:22.055 "name": "BaseBdev3", 00:16:22.055 "uuid": "18748bfc-6b33-4dac-9a93-82ebb25c284e", 00:16:22.055 "is_configured": true, 00:16:22.055 "data_offset": 2048, 00:16:22.055 "data_size": 63488 00:16:22.055 } 00:16:22.055 ] 00:16:22.055 }' 00:16:22.055 10:41:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.055 10:41:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.620 10:41:49 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:22.878 [2024-07-24 10:41:49.412183] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.878 [2024-07-24 10:41:49.412392] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.878 [2024-07-24 10:41:49.412604] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:22.878 10:41:49 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.878 10:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.136 10:41:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.137 "name": "Existed_Raid", 00:16:23.137 "uuid": "0b5e44f3-35d1-4903-bd97-7abd606cdff0", 00:16:23.137 "strip_size_kb": 64, 00:16:23.137 "state": "offline", 00:16:23.137 "raid_level": "concat", 00:16:23.137 "superblock": true, 00:16:23.137 "num_base_bdevs": 3, 00:16:23.137 "num_base_bdevs_discovered": 2, 00:16:23.137 "num_base_bdevs_operational": 2, 00:16:23.137 "base_bdevs_list": [ 00:16:23.137 { 00:16:23.137 "name": null, 00:16:23.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.137 "is_configured": false, 00:16:23.137 "data_offset": 2048, 00:16:23.137 "data_size": 63488 00:16:23.137 }, 00:16:23.137 { 00:16:23.137 "name": "BaseBdev2", 00:16:23.137 "uuid": "7427aaa0-0f15-436c-b707-7fd6db4651a0", 00:16:23.137 "is_configured": true, 00:16:23.137 "data_offset": 2048, 00:16:23.137 "data_size": 63488 00:16:23.137 }, 00:16:23.137 { 00:16:23.137 "name": "BaseBdev3", 00:16:23.137 "uuid": "18748bfc-6b33-4dac-9a93-82ebb25c284e", 00:16:23.137 "is_configured": true, 00:16:23.137 "data_offset": 2048, 00:16:23.137 "data_size": 63488 00:16:23.137 } 00:16:23.137 ] 00:16:23.137 }' 00:16:23.137 10:41:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.137 10:41:49 -- common/autotest_common.sh@10 -- # set +x 00:16:23.703 10:41:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:23.703 10:41:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:23.703 10:41:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.703 10:41:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:23.961 10:41:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:23.961 10:41:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:23.961 10:41:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:24.219 [2024-07-24 10:41:50.881929] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.477 10:41:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:24.477 10:41:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:24.477 10:41:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.477 10:41:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:24.751 10:41:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:24.751 10:41:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.751 10:41:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:24.751 [2024-07-24 10:41:51.412091] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
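For reference, the offline-state check traced above reduces to two RPCs against the test socket. A minimal by-hand sketch, using the same rpc.py path, socket name, and jq filter that appear in the trace (the trailing ".state" projection is an added illustration, not part of the test script):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # for concat (no redundancy) the array reports "offline" as soon as any base bdev is removed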
00:16:24.751 [2024-07-24 10:41:51.412416] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:16:25.038 10:41:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:25.038 10:41:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.038 10:41:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.038 10:41:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.038 10:41:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:25.038 10:41:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:25.038 10:41:51 -- bdev/bdev_raid.sh@287 -- # killprocess 127184 00:16:25.038 10:41:51 -- common/autotest_common.sh@926 -- # '[' -z 127184 ']' 00:16:25.038 10:41:51 -- common/autotest_common.sh@930 -- # kill -0 127184 00:16:25.038 10:41:51 -- common/autotest_common.sh@931 -- # uname 00:16:25.297 10:41:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:25.297 10:41:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127184 00:16:25.297 killing process with pid 127184 00:16:25.297 10:41:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:25.297 10:41:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:25.297 10:41:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127184' 00:16:25.297 10:41:51 -- common/autotest_common.sh@945 -- # kill 127184 00:16:25.297 10:41:51 -- common/autotest_common.sh@950 -- # wait 127184 00:16:25.297 [2024-07-24 10:41:51.744078] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.297 [2024-07-24 10:41:51.744177] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:25.555 ************************************ 00:16:25.555 END TEST raid_state_function_test_sb 00:16:25.555 ************************************ 00:16:25.555 10:41:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:25.555 00:16:25.555 real 0m13.177s 00:16:25.555 user 0m24.038s 00:16:25.555 sys 0m1.796s 00:16:25.555 10:41:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.555 10:41:51 -- common/autotest_common.sh@10 -- # set +x 00:16:25.555 10:41:52 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:25.555 10:41:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:25.555 10:41:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:25.555 10:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:25.555 ************************************ 00:16:25.555 START TEST raid_superblock_test 00:16:25.555 ************************************ 00:16:25.556 10:41:52 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@344 -- # local strip_size 
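Each leg of the superblock test below wraps a malloc bdev in a passthru bdev (pt1..pt3) and then assembles the array with the -s flag so an on-disk superblock is written. Condensed from the RPC calls traced below (socket path, block counts, and UUIDs as they appear in the trace), one pass looks roughly like:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ... repeated for malloc2/pt2 and malloc3/pt3 ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s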
00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@357 -- # raid_pid=127583 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:25.556 10:41:52 -- bdev/bdev_raid.sh@358 -- # waitforlisten 127583 /var/tmp/spdk-raid.sock 00:16:25.556 10:41:52 -- common/autotest_common.sh@819 -- # '[' -z 127583 ']' 00:16:25.556 10:41:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:25.556 10:41:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:25.556 10:41:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:25.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:25.556 10:41:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:25.556 10:41:52 -- common/autotest_common.sh@10 -- # set +x 00:16:25.556 [2024-07-24 10:41:52.105156] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:25.556 [2024-07-24 10:41:52.105768] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127583 ] 00:16:25.814 [2024-07-24 10:41:52.253998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.814 [2024-07-24 10:41:52.382941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.814 [2024-07-24 10:41:52.458653] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:26.380 10:41:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:26.380 10:41:53 -- common/autotest_common.sh@852 -- # return 0 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.380 10:41:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:26.638 malloc1 00:16:26.638 10:41:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.894 [2024-07-24 10:41:53.510677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.894 [2024-07-24 10:41:53.511219] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:26.894 [2024-07-24 10:41:53.511409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:16:26.894 [2024-07-24 10:41:53.511624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.894 [2024-07-24 10:41:53.514814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.894 [2024-07-24 10:41:53.515021] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.894 pt1 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.894 10:41:53 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.895 10:41:53 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:27.151 malloc2 00:16:27.151 10:41:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.715 [2024-07-24 10:41:54.106457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.715 [2024-07-24 10:41:54.106887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.715 [2024-07-24 10:41:54.106996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:27.715 [2024-07-24 10:41:54.107312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.715 [2024-07-24 10:41:54.110283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.715 [2024-07-24 10:41:54.110468] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.715 pt2 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:27.715 malloc3 00:16:27.715 10:41:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:27.972 [2024-07-24 10:41:54.626587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:27.972 [2024-07-24 10:41:54.627115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:27.972 [2024-07-24 10:41:54.627234] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:27.972 [2024-07-24 10:41:54.627554] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.972 [2024-07-24 10:41:54.630464] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.972 [2024-07-24 10:41:54.630662] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:27.972 pt3 00:16:27.972 10:41:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:27.972 10:41:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:27.972 10:41:54 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:28.229 [2024-07-24 10:41:54.855278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.229 [2024-07-24 10:41:54.858078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.229 [2024-07-24 10:41:54.858306] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.229 [2024-07-24 10:41:54.858712] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:28.229 [2024-07-24 10:41:54.858848] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:28.229 [2024-07-24 10:41:54.859199] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:16:28.229 [2024-07-24 10:41:54.859861] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:28.229 [2024-07-24 10:41:54.859997] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:16:28.229 [2024-07-24 10:41:54.860357] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.229 10:41:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.486 10:41:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.486 "name": "raid_bdev1", 00:16:28.486 "uuid": "2aa2c3b7-e30d-4902-9ae1-516b9a4690f5", 00:16:28.486 "strip_size_kb": 64, 00:16:28.486 "state": "online", 00:16:28.486 "raid_level": "concat", 00:16:28.486 "superblock": true, 00:16:28.486 "num_base_bdevs": 3, 00:16:28.486 "num_base_bdevs_discovered": 3, 00:16:28.486 "num_base_bdevs_operational": 3, 00:16:28.486 "base_bdevs_list": [ 00:16:28.486 { 00:16:28.486 "name": "pt1", 00:16:28.486 "uuid": 
"3a91c417-60dd-53c1-a354-6b4e6a922faf", 00:16:28.486 "is_configured": true, 00:16:28.486 "data_offset": 2048, 00:16:28.486 "data_size": 63488 00:16:28.486 }, 00:16:28.486 { 00:16:28.486 "name": "pt2", 00:16:28.486 "uuid": "c66faad4-13e9-5b26-a0fa-e97b55254a37", 00:16:28.486 "is_configured": true, 00:16:28.486 "data_offset": 2048, 00:16:28.486 "data_size": 63488 00:16:28.486 }, 00:16:28.486 { 00:16:28.486 "name": "pt3", 00:16:28.486 "uuid": "b06095ae-7bc4-5b45-930c-fccb6235a6b5", 00:16:28.486 "is_configured": true, 00:16:28.486 "data_offset": 2048, 00:16:28.486 "data_size": 63488 00:16:28.486 } 00:16:28.486 ] 00:16:28.486 }' 00:16:28.486 10:41:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.486 10:41:55 -- common/autotest_common.sh@10 -- # set +x 00:16:29.420 10:41:55 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:29.420 10:41:55 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:29.420 [2024-07-24 10:41:55.984961] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.420 10:41:56 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2aa2c3b7-e30d-4902-9ae1-516b9a4690f5 00:16:29.420 10:41:56 -- bdev/bdev_raid.sh@380 -- # '[' -z 2aa2c3b7-e30d-4902-9ae1-516b9a4690f5 ']' 00:16:29.420 10:41:56 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:29.680 [2024-07-24 10:41:56.220747] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.680 [2024-07-24 10:41:56.221068] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.680 [2024-07-24 10:41:56.221352] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.680 [2024-07-24 10:41:56.221610] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.680 [2024-07-24 10:41:56.221741] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:16:29.680 10:41:56 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.680 10:41:56 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:29.938 10:41:56 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:29.938 10:41:56 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:29.938 10:41:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:29.938 10:41:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:30.196 10:41:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.196 10:41:56 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:30.454 10:41:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.454 10:41:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:30.711 10:41:57 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:30.711 10:41:57 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:30.969 10:41:57 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:30.969 10:41:57 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:30.969 10:41:57 -- common/autotest_common.sh@640 -- # local es=0 00:16:30.969 10:41:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:30.969 10:41:57 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.969 10:41:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:30.969 10:41:57 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.969 10:41:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:30.969 10:41:57 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.969 10:41:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:30.969 10:41:57 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.969 10:41:57 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:30.969 10:41:57 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:31.226 [2024-07-24 10:41:57.725142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.226 [2024-07-24 10:41:57.728036] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.227 [2024-07-24 10:41:57.728254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:31.227 [2024-07-24 10:41:57.728375] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:31.227 [2024-07-24 10:41:57.728673] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:31.227 [2024-07-24 10:41:57.728850] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:31.227 [2024-07-24 10:41:57.729036] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.227 [2024-07-24 10:41:57.729160] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:16:31.227 request: 00:16:31.227 { 00:16:31.227 "name": "raid_bdev1", 00:16:31.227 "raid_level": "concat", 00:16:31.227 "base_bdevs": [ 00:16:31.227 "malloc1", 00:16:31.227 "malloc2", 00:16:31.227 "malloc3" 00:16:31.227 ], 00:16:31.227 "superblock": false, 00:16:31.227 "strip_size_kb": 64, 00:16:31.227 "method": "bdev_raid_create", 00:16:31.227 "req_id": 1 00:16:31.227 } 00:16:31.227 Got JSON-RPC error response 00:16:31.227 response: 00:16:31.227 { 00:16:31.227 "code": -17, 00:16:31.227 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.227 } 00:16:31.227 10:41:57 -- common/autotest_common.sh@643 -- # es=1 00:16:31.227 10:41:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:31.227 10:41:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:31.227 10:41:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:31.227 10:41:57 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.227 10:41:57 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:31.484 10:41:57 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:31.484 10:41:57 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:31.484 10:41:57 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.741 [2024-07-24 10:41:58.225726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:31.741 [2024-07-24 10:41:58.226144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.741 [2024-07-24 10:41:58.226350] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:31.741 [2024-07-24 10:41:58.226505] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.741 [2024-07-24 10:41:58.229471] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.741 [2024-07-24 10:41:58.229658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.741 [2024-07-24 10:41:58.229919] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:31.741 [2024-07-24 10:41:58.230130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.741 pt1 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.741 10:41:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.999 10:41:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.999 "name": "raid_bdev1", 00:16:31.999 "uuid": "2aa2c3b7-e30d-4902-9ae1-516b9a4690f5", 00:16:31.999 "strip_size_kb": 64, 00:16:31.999 "state": "configuring", 00:16:31.999 "raid_level": "concat", 00:16:31.999 "superblock": true, 00:16:31.999 "num_base_bdevs": 3, 00:16:31.999 "num_base_bdevs_discovered": 1, 00:16:31.999 "num_base_bdevs_operational": 3, 00:16:31.999 "base_bdevs_list": [ 00:16:31.999 { 00:16:31.999 "name": "pt1", 00:16:31.999 "uuid": "3a91c417-60dd-53c1-a354-6b4e6a922faf", 00:16:31.999 "is_configured": true, 00:16:31.999 "data_offset": 2048, 00:16:31.999 "data_size": 63488 00:16:31.999 }, 00:16:31.999 { 00:16:31.999 "name": null, 00:16:31.999 "uuid": "c66faad4-13e9-5b26-a0fa-e97b55254a37", 00:16:31.999 "is_configured": false, 00:16:31.999 "data_offset": 2048, 00:16:31.999 "data_size": 63488 00:16:31.999 }, 00:16:31.999 { 00:16:31.999 "name": null, 00:16:31.999 "uuid": "b06095ae-7bc4-5b45-930c-fccb6235a6b5", 00:16:31.999 "is_configured": false, 00:16:31.999 
"data_offset": 2048, 00:16:31.999 "data_size": 63488 00:16:31.999 } 00:16:31.999 ] 00:16:31.999 }' 00:16:31.999 10:41:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.999 10:41:58 -- common/autotest_common.sh@10 -- # set +x 00:16:32.565 10:41:59 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:32.565 10:41:59 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:32.822 [2024-07-24 10:41:59.430415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:32.822 [2024-07-24 10:41:59.430847] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.822 [2024-07-24 10:41:59.431047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:32.822 [2024-07-24 10:41:59.431251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.822 [2024-07-24 10:41:59.431949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.822 [2024-07-24 10:41:59.432128] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:32.822 [2024-07-24 10:41:59.432376] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:32.822 [2024-07-24 10:41:59.432525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:32.822 pt2 00:16:32.822 10:41:59 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:33.080 [2024-07-24 10:41:59.702528] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.080 10:41:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.337 10:41:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.337 "name": "raid_bdev1", 00:16:33.337 "uuid": "2aa2c3b7-e30d-4902-9ae1-516b9a4690f5", 00:16:33.337 "strip_size_kb": 64, 00:16:33.337 "state": "configuring", 00:16:33.337 "raid_level": "concat", 00:16:33.337 "superblock": true, 00:16:33.337 "num_base_bdevs": 3, 00:16:33.337 "num_base_bdevs_discovered": 1, 00:16:33.337 "num_base_bdevs_operational": 3, 00:16:33.337 "base_bdevs_list": [ 00:16:33.337 { 00:16:33.337 "name": "pt1", 00:16:33.338 "uuid": "3a91c417-60dd-53c1-a354-6b4e6a922faf", 00:16:33.338 "is_configured": true, 00:16:33.338 "data_offset": 2048, 00:16:33.338 "data_size": 63488 00:16:33.338 }, 00:16:33.338 { 00:16:33.338 "name": null, 00:16:33.338 "uuid": 
"c66faad4-13e9-5b26-a0fa-e97b55254a37", 00:16:33.338 "is_configured": false, 00:16:33.338 "data_offset": 2048, 00:16:33.338 "data_size": 63488 00:16:33.338 }, 00:16:33.338 { 00:16:33.338 "name": null, 00:16:33.338 "uuid": "b06095ae-7bc4-5b45-930c-fccb6235a6b5", 00:16:33.338 "is_configured": false, 00:16:33.338 "data_offset": 2048, 00:16:33.338 "data_size": 63488 00:16:33.338 } 00:16:33.338 ] 00:16:33.338 }' 00:16:33.338 10:41:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.338 10:41:59 -- common/autotest_common.sh@10 -- # set +x 00:16:34.271 10:42:00 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:34.271 10:42:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.271 10:42:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.271 [2024-07-24 10:42:00.906751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.271 [2024-07-24 10:42:00.907201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.271 [2024-07-24 10:42:00.907309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:34.271 [2024-07-24 10:42:00.907672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.271 [2024-07-24 10:42:00.908298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.271 [2024-07-24 10:42:00.908488] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.271 [2024-07-24 10:42:00.908736] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:34.271 [2024-07-24 10:42:00.908889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.271 pt2 00:16:34.271 10:42:00 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:34.271 10:42:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.271 10:42:00 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:34.530 [2024-07-24 10:42:01.150950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:34.530 [2024-07-24 10:42:01.151379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.530 [2024-07-24 10:42:01.151475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:34.530 [2024-07-24 10:42:01.151815] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.530 [2024-07-24 10:42:01.152492] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.530 [2024-07-24 10:42:01.152672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:34.530 [2024-07-24 10:42:01.152938] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:34.530 [2024-07-24 10:42:01.153076] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:34.530 [2024-07-24 10:42:01.153373] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:34.530 [2024-07-24 10:42:01.153483] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:34.530 [2024-07-24 10:42:01.153617] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000026d0 00:16:34.530 [2024-07-24 10:42:01.154033] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:34.530 [2024-07-24 10:42:01.154162] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:34.530 [2024-07-24 10:42:01.154375] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.530 pt3 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.530 10:42:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.531 10:42:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.531 10:42:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.531 10:42:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.531 10:42:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.787 10:42:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.787 "name": "raid_bdev1", 00:16:34.787 "uuid": "2aa2c3b7-e30d-4902-9ae1-516b9a4690f5", 00:16:34.787 "strip_size_kb": 64, 00:16:34.787 "state": "online", 00:16:34.787 "raid_level": "concat", 00:16:34.787 "superblock": true, 00:16:34.787 "num_base_bdevs": 3, 00:16:34.787 "num_base_bdevs_discovered": 3, 00:16:34.787 "num_base_bdevs_operational": 3, 00:16:34.787 "base_bdevs_list": [ 00:16:34.787 { 00:16:34.787 "name": "pt1", 00:16:34.787 "uuid": "3a91c417-60dd-53c1-a354-6b4e6a922faf", 00:16:34.787 "is_configured": true, 00:16:34.787 "data_offset": 2048, 00:16:34.787 "data_size": 63488 00:16:34.787 }, 00:16:34.787 { 00:16:34.787 "name": "pt2", 00:16:34.787 "uuid": "c66faad4-13e9-5b26-a0fa-e97b55254a37", 00:16:34.787 "is_configured": true, 00:16:34.787 "data_offset": 2048, 00:16:34.787 "data_size": 63488 00:16:34.787 }, 00:16:34.787 { 00:16:34.787 "name": "pt3", 00:16:34.787 "uuid": "b06095ae-7bc4-5b45-930c-fccb6235a6b5", 00:16:34.787 "is_configured": true, 00:16:34.787 "data_offset": 2048, 00:16:34.787 "data_size": 63488 00:16:34.787 } 00:16:34.787 ] 00:16:34.787 }' 00:16:34.787 10:42:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.787 10:42:01 -- common/autotest_common.sh@10 -- # set +x 00:16:35.719 10:42:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:35.719 10:42:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:35.719 [2024-07-24 10:42:02.339525] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.719 10:42:02 -- bdev/bdev_raid.sh@430 -- # '[' 2aa2c3b7-e30d-4902-9ae1-516b9a4690f5 '!=' 2aa2c3b7-e30d-4902-9ae1-516b9a4690f5 ']' 00:16:35.719 10:42:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:35.719 10:42:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:35.719 
10:42:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:35.719 10:42:02 -- bdev/bdev_raid.sh@511 -- # killprocess 127583 00:16:35.719 10:42:02 -- common/autotest_common.sh@926 -- # '[' -z 127583 ']' 00:16:35.719 10:42:02 -- common/autotest_common.sh@930 -- # kill -0 127583 00:16:35.719 10:42:02 -- common/autotest_common.sh@931 -- # uname 00:16:35.719 10:42:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:35.719 10:42:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127583 00:16:35.719 killing process with pid 127583 00:16:35.719 10:42:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:35.719 10:42:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:35.719 10:42:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127583' 00:16:35.719 10:42:02 -- common/autotest_common.sh@945 -- # kill 127583 00:16:35.719 10:42:02 -- common/autotest_common.sh@950 -- # wait 127583 00:16:35.719 [2024-07-24 10:42:02.387032] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:35.719 [2024-07-24 10:42:02.387149] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.719 [2024-07-24 10:42:02.387224] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.719 [2024-07-24 10:42:02.387254] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:35.976 [2024-07-24 10:42:02.433252] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:36.234 ************************************ 00:16:36.234 END TEST raid_superblock_test 00:16:36.234 ************************************ 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:36.234 00:16:36.234 real 0m10.730s 00:16:36.234 user 0m19.482s 00:16:36.234 sys 0m1.328s 00:16:36.234 10:42:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.234 10:42:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:36.234 10:42:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:36.234 10:42:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:36.234 10:42:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.234 ************************************ 00:16:36.234 START TEST raid_state_function_test 00:16:36.234 ************************************ 00:16:36.234 10:42:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=127888 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 127888' 00:16:36.234 Process raid pid: 127888 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:36.234 10:42:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 127888 /var/tmp/spdk-raid.sock 00:16:36.234 10:42:02 -- common/autotest_common.sh@819 -- # '[' -z 127888 ']' 00:16:36.234 10:42:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:36.234 10:42:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:36.234 10:42:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:36.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:36.234 10:42:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:36.234 10:42:02 -- common/autotest_common.sh@10 -- # set +x 00:16:36.234 [2024-07-24 10:42:02.896006] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:16:36.234 [2024-07-24 10:42:02.896486] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.492 [2024-07-24 10:42:03.034229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.492 [2024-07-24 10:42:03.153002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.750 [2024-07-24 10:42:03.230294] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.316 10:42:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:37.316 10:42:03 -- common/autotest_common.sh@852 -- # return 0 00:16:37.316 10:42:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:37.574 [2024-07-24 10:42:04.142236] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.574 [2024-07-24 10:42:04.142753] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.574 [2024-07-24 10:42:04.142884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.574 [2024-07-24 10:42:04.142953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.574 [2024-07-24 10:42:04.143102] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.574 [2024-07-24 10:42:04.143292] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.574 10:42:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.831 10:42:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.831 "name": "Existed_Raid", 00:16:37.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.831 "strip_size_kb": 0, 00:16:37.831 "state": "configuring", 00:16:37.831 "raid_level": "raid1", 00:16:37.831 "superblock": false, 00:16:37.831 "num_base_bdevs": 3, 00:16:37.831 "num_base_bdevs_discovered": 0, 00:16:37.831 "num_base_bdevs_operational": 3, 00:16:37.831 "base_bdevs_list": [ 00:16:37.831 { 00:16:37.831 "name": "BaseBdev1", 00:16:37.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.831 "is_configured": false, 00:16:37.831 "data_offset": 0, 00:16:37.831 "data_size": 0 00:16:37.831 }, 00:16:37.831 { 00:16:37.831 "name": "BaseBdev2", 00:16:37.831 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:37.832 "is_configured": false, 00:16:37.832 "data_offset": 0, 00:16:37.832 "data_size": 0 00:16:37.832 }, 00:16:37.832 { 00:16:37.832 "name": "BaseBdev3", 00:16:37.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.832 "is_configured": false, 00:16:37.832 "data_offset": 0, 00:16:37.832 "data_size": 0 00:16:37.832 } 00:16:37.832 ] 00:16:37.832 }' 00:16:37.832 10:42:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.832 10:42:04 -- common/autotest_common.sh@10 -- # set +x 00:16:38.397 10:42:05 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:38.654 [2024-07-24 10:42:05.286308] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.654 [2024-07-24 10:42:05.286703] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:38.654 10:42:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:38.912 [2024-07-24 10:42:05.562492] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.912 [2024-07-24 10:42:05.562912] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.912 [2024-07-24 10:42:05.563038] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.912 [2024-07-24 10:42:05.563201] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.912 [2024-07-24 10:42:05.563308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.912 [2024-07-24 10:42:05.563460] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.912 10:42:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.170 [2024-07-24 10:42:05.817830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.170 BaseBdev1 00:16:39.170 10:42:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:39.170 10:42:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:39.170 10:42:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:39.170 10:42:05 -- common/autotest_common.sh@889 -- # local i 00:16:39.170 10:42:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:39.170 10:42:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:39.170 10:42:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.428 10:42:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:39.686 [ 00:16:39.686 { 00:16:39.686 "name": "BaseBdev1", 00:16:39.686 "aliases": [ 00:16:39.686 "2509088b-c833-442a-9088-3610333df0f6" 00:16:39.686 ], 00:16:39.686 "product_name": "Malloc disk", 00:16:39.686 "block_size": 512, 00:16:39.686 "num_blocks": 65536, 00:16:39.686 "uuid": "2509088b-c833-442a-9088-3610333df0f6", 00:16:39.686 "assigned_rate_limits": { 00:16:39.686 "rw_ios_per_sec": 0, 00:16:39.686 "rw_mbytes_per_sec": 0, 00:16:39.686 "r_mbytes_per_sec": 0, 00:16:39.686 "w_mbytes_per_sec": 0 
00:16:39.686 }, 00:16:39.686 "claimed": true, 00:16:39.686 "claim_type": "exclusive_write", 00:16:39.686 "zoned": false, 00:16:39.686 "supported_io_types": { 00:16:39.686 "read": true, 00:16:39.686 "write": true, 00:16:39.686 "unmap": true, 00:16:39.686 "write_zeroes": true, 00:16:39.686 "flush": true, 00:16:39.686 "reset": true, 00:16:39.686 "compare": false, 00:16:39.686 "compare_and_write": false, 00:16:39.686 "abort": true, 00:16:39.686 "nvme_admin": false, 00:16:39.686 "nvme_io": false 00:16:39.686 }, 00:16:39.686 "memory_domains": [ 00:16:39.686 { 00:16:39.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.686 "dma_device_type": 2 00:16:39.686 } 00:16:39.686 ], 00:16:39.686 "driver_specific": {} 00:16:39.686 } 00:16:39.686 ] 00:16:39.686 10:42:06 -- common/autotest_common.sh@895 -- # return 0 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.686 10:42:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.944 10:42:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.944 "name": "Existed_Raid", 00:16:39.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.944 "strip_size_kb": 0, 00:16:39.944 "state": "configuring", 00:16:39.944 "raid_level": "raid1", 00:16:39.944 "superblock": false, 00:16:39.944 "num_base_bdevs": 3, 00:16:39.944 "num_base_bdevs_discovered": 1, 00:16:39.944 "num_base_bdevs_operational": 3, 00:16:39.944 "base_bdevs_list": [ 00:16:39.944 { 00:16:39.944 "name": "BaseBdev1", 00:16:39.944 "uuid": "2509088b-c833-442a-9088-3610333df0f6", 00:16:39.944 "is_configured": true, 00:16:39.944 "data_offset": 0, 00:16:39.944 "data_size": 65536 00:16:39.944 }, 00:16:39.944 { 00:16:39.944 "name": "BaseBdev2", 00:16:39.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.944 "is_configured": false, 00:16:39.944 "data_offset": 0, 00:16:39.944 "data_size": 0 00:16:39.944 }, 00:16:39.944 { 00:16:39.944 "name": "BaseBdev3", 00:16:39.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.944 "is_configured": false, 00:16:39.944 "data_offset": 0, 00:16:39.944 "data_size": 0 00:16:39.944 } 00:16:39.944 ] 00:16:39.944 }' 00:16:39.944 10:42:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.944 10:42:06 -- common/autotest_common.sh@10 -- # set +x 00:16:40.876 10:42:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:40.876 [2024-07-24 10:42:07.410359] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:40.876 [2024-07-24 10:42:07.410714] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 
name Existed_Raid, state configuring 00:16:40.876 10:42:07 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:40.876 10:42:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:41.134 [2024-07-24 10:42:07.634570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.134 [2024-07-24 10:42:07.637519] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.134 [2024-07-24 10:42:07.637745] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.134 [2024-07-24 10:42:07.637857] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.134 [2024-07-24 10:42:07.638009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.134 10:42:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.392 10:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:41.392 "name": "Existed_Raid", 00:16:41.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.392 "strip_size_kb": 0, 00:16:41.392 "state": "configuring", 00:16:41.392 "raid_level": "raid1", 00:16:41.392 "superblock": false, 00:16:41.392 "num_base_bdevs": 3, 00:16:41.392 "num_base_bdevs_discovered": 1, 00:16:41.392 "num_base_bdevs_operational": 3, 00:16:41.392 "base_bdevs_list": [ 00:16:41.392 { 00:16:41.392 "name": "BaseBdev1", 00:16:41.392 "uuid": "2509088b-c833-442a-9088-3610333df0f6", 00:16:41.392 "is_configured": true, 00:16:41.392 "data_offset": 0, 00:16:41.392 "data_size": 65536 00:16:41.392 }, 00:16:41.392 { 00:16:41.392 "name": "BaseBdev2", 00:16:41.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.392 "is_configured": false, 00:16:41.392 "data_offset": 0, 00:16:41.392 "data_size": 0 00:16:41.392 }, 00:16:41.392 { 00:16:41.392 "name": "BaseBdev3", 00:16:41.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.392 "is_configured": false, 00:16:41.392 "data_offset": 0, 00:16:41.392 "data_size": 0 00:16:41.392 } 00:16:41.392 ] 00:16:41.392 }' 00:16:41.392 10:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:41.392 10:42:07 -- common/autotest_common.sh@10 -- # set +x 00:16:41.957 10:42:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:42.216 [2024-07-24 10:42:08.817129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:42.216 BaseBdev2 00:16:42.216 10:42:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:42.216 10:42:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:42.216 10:42:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:42.216 10:42:08 -- common/autotest_common.sh@889 -- # local i 00:16:42.216 10:42:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:42.216 10:42:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:42.216 10:42:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:42.474 10:42:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:42.741 [ 00:16:42.741 { 00:16:42.741 "name": "BaseBdev2", 00:16:42.741 "aliases": [ 00:16:42.741 "5d3c3410-27c0-4b4f-b735-1a22b85683cf" 00:16:42.741 ], 00:16:42.741 "product_name": "Malloc disk", 00:16:42.741 "block_size": 512, 00:16:42.741 "num_blocks": 65536, 00:16:42.741 "uuid": "5d3c3410-27c0-4b4f-b735-1a22b85683cf", 00:16:42.741 "assigned_rate_limits": { 00:16:42.741 "rw_ios_per_sec": 0, 00:16:42.741 "rw_mbytes_per_sec": 0, 00:16:42.741 "r_mbytes_per_sec": 0, 00:16:42.741 "w_mbytes_per_sec": 0 00:16:42.742 }, 00:16:42.742 "claimed": true, 00:16:42.742 "claim_type": "exclusive_write", 00:16:42.742 "zoned": false, 00:16:42.742 "supported_io_types": { 00:16:42.742 "read": true, 00:16:42.742 "write": true, 00:16:42.742 "unmap": true, 00:16:42.742 "write_zeroes": true, 00:16:42.742 "flush": true, 00:16:42.742 "reset": true, 00:16:42.742 "compare": false, 00:16:42.742 "compare_and_write": false, 00:16:42.742 "abort": true, 00:16:42.742 "nvme_admin": false, 00:16:42.742 "nvme_io": false 00:16:42.742 }, 00:16:42.742 "memory_domains": [ 00:16:42.742 { 00:16:42.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.742 "dma_device_type": 2 00:16:42.742 } 00:16:42.742 ], 00:16:42.742 "driver_specific": {} 00:16:42.742 } 00:16:42.742 ] 00:16:42.742 10:42:09 -- common/autotest_common.sh@895 -- # return 0 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.742 10:42:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.020 10:42:09 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:43.020 "name": "Existed_Raid", 00:16:43.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.020 "strip_size_kb": 0, 00:16:43.020 "state": "configuring", 00:16:43.020 "raid_level": "raid1", 00:16:43.020 "superblock": false, 00:16:43.020 "num_base_bdevs": 3, 00:16:43.020 "num_base_bdevs_discovered": 2, 00:16:43.020 "num_base_bdevs_operational": 3, 00:16:43.020 "base_bdevs_list": [ 00:16:43.020 { 00:16:43.020 "name": "BaseBdev1", 00:16:43.020 "uuid": "2509088b-c833-442a-9088-3610333df0f6", 00:16:43.020 "is_configured": true, 00:16:43.020 "data_offset": 0, 00:16:43.020 "data_size": 65536 00:16:43.020 }, 00:16:43.020 { 00:16:43.020 "name": "BaseBdev2", 00:16:43.020 "uuid": "5d3c3410-27c0-4b4f-b735-1a22b85683cf", 00:16:43.020 "is_configured": true, 00:16:43.020 "data_offset": 0, 00:16:43.020 "data_size": 65536 00:16:43.020 }, 00:16:43.020 { 00:16:43.020 "name": "BaseBdev3", 00:16:43.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.020 "is_configured": false, 00:16:43.020 "data_offset": 0, 00:16:43.020 "data_size": 0 00:16:43.020 } 00:16:43.020 ] 00:16:43.020 }' 00:16:43.020 10:42:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.020 10:42:09 -- common/autotest_common.sh@10 -- # set +x 00:16:43.586 10:42:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:43.844 [2024-07-24 10:42:10.526329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.844 [2024-07-24 10:42:10.528519] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:16:43.844 [2024-07-24 10:42:10.528966] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:43.844 [2024-07-24 10:42:10.529674] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:16:44.102 [2024-07-24 10:42:10.531025] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:16:44.102 [2024-07-24 10:42:10.531319] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:16:44.102 [2024-07-24 10:42:10.532316] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.102 BaseBdev3 00:16:44.102 10:42:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:44.102 10:42:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:44.102 10:42:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:44.102 10:42:10 -- common/autotest_common.sh@889 -- # local i 00:16:44.102 10:42:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:44.102 10:42:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:44.102 10:42:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.360 10:42:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:44.360 [ 00:16:44.360 { 00:16:44.360 "name": "BaseBdev3", 00:16:44.360 "aliases": [ 00:16:44.360 "f818ea88-6918-4f25-b108-4decb473b41e" 00:16:44.360 ], 00:16:44.360 "product_name": "Malloc disk", 00:16:44.360 "block_size": 512, 00:16:44.360 "num_blocks": 65536, 00:16:44.360 "uuid": "f818ea88-6918-4f25-b108-4decb473b41e", 00:16:44.360 "assigned_rate_limits": { 00:16:44.360 "rw_ios_per_sec": 0, 00:16:44.360 "rw_mbytes_per_sec": 0, 
00:16:44.360 "r_mbytes_per_sec": 0, 00:16:44.360 "w_mbytes_per_sec": 0 00:16:44.360 }, 00:16:44.360 "claimed": true, 00:16:44.360 "claim_type": "exclusive_write", 00:16:44.360 "zoned": false, 00:16:44.360 "supported_io_types": { 00:16:44.360 "read": true, 00:16:44.360 "write": true, 00:16:44.360 "unmap": true, 00:16:44.360 "write_zeroes": true, 00:16:44.360 "flush": true, 00:16:44.360 "reset": true, 00:16:44.360 "compare": false, 00:16:44.360 "compare_and_write": false, 00:16:44.360 "abort": true, 00:16:44.360 "nvme_admin": false, 00:16:44.360 "nvme_io": false 00:16:44.360 }, 00:16:44.360 "memory_domains": [ 00:16:44.360 { 00:16:44.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.360 "dma_device_type": 2 00:16:44.360 } 00:16:44.360 ], 00:16:44.360 "driver_specific": {} 00:16:44.360 } 00:16:44.360 ] 00:16:44.619 10:42:11 -- common/autotest_common.sh@895 -- # return 0 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.619 10:42:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.877 10:42:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:44.877 "name": "Existed_Raid", 00:16:44.877 "uuid": "f9769a49-0bd1-402a-ac9a-cda5adcfdcb5", 00:16:44.877 "strip_size_kb": 0, 00:16:44.877 "state": "online", 00:16:44.877 "raid_level": "raid1", 00:16:44.877 "superblock": false, 00:16:44.877 "num_base_bdevs": 3, 00:16:44.877 "num_base_bdevs_discovered": 3, 00:16:44.877 "num_base_bdevs_operational": 3, 00:16:44.877 "base_bdevs_list": [ 00:16:44.877 { 00:16:44.877 "name": "BaseBdev1", 00:16:44.877 "uuid": "2509088b-c833-442a-9088-3610333df0f6", 00:16:44.877 "is_configured": true, 00:16:44.877 "data_offset": 0, 00:16:44.877 "data_size": 65536 00:16:44.877 }, 00:16:44.877 { 00:16:44.877 "name": "BaseBdev2", 00:16:44.877 "uuid": "5d3c3410-27c0-4b4f-b735-1a22b85683cf", 00:16:44.878 "is_configured": true, 00:16:44.878 "data_offset": 0, 00:16:44.878 "data_size": 65536 00:16:44.878 }, 00:16:44.878 { 00:16:44.878 "name": "BaseBdev3", 00:16:44.878 "uuid": "f818ea88-6918-4f25-b108-4decb473b41e", 00:16:44.878 "is_configured": true, 00:16:44.878 "data_offset": 0, 00:16:44.878 "data_size": 65536 00:16:44.878 } 00:16:44.878 ] 00:16:44.878 }' 00:16:44.878 10:42:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:44.878 10:42:11 -- common/autotest_common.sh@10 -- # set +x 00:16:45.444 10:42:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:45.702 [2024-07-24 
10:42:12.200256] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.702 10:42:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.961 10:42:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.961 "name": "Existed_Raid", 00:16:45.961 "uuid": "f9769a49-0bd1-402a-ac9a-cda5adcfdcb5", 00:16:45.961 "strip_size_kb": 0, 00:16:45.961 "state": "online", 00:16:45.961 "raid_level": "raid1", 00:16:45.961 "superblock": false, 00:16:45.961 "num_base_bdevs": 3, 00:16:45.961 "num_base_bdevs_discovered": 2, 00:16:45.961 "num_base_bdevs_operational": 2, 00:16:45.961 "base_bdevs_list": [ 00:16:45.961 { 00:16:45.961 "name": null, 00:16:45.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.961 "is_configured": false, 00:16:45.961 "data_offset": 0, 00:16:45.961 "data_size": 65536 00:16:45.961 }, 00:16:45.961 { 00:16:45.961 "name": "BaseBdev2", 00:16:45.961 "uuid": "5d3c3410-27c0-4b4f-b735-1a22b85683cf", 00:16:45.961 "is_configured": true, 00:16:45.961 "data_offset": 0, 00:16:45.961 "data_size": 65536 00:16:45.961 }, 00:16:45.961 { 00:16:45.961 "name": "BaseBdev3", 00:16:45.961 "uuid": "f818ea88-6918-4f25-b108-4decb473b41e", 00:16:45.961 "is_configured": true, 00:16:45.961 "data_offset": 0, 00:16:45.961 "data_size": 65536 00:16:45.961 } 00:16:45.961 ] 00:16:45.961 }' 00:16:45.961 10:42:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.961 10:42:12 -- common/autotest_common.sh@10 -- # set +x 00:16:46.527 10:42:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:46.527 10:42:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:46.527 10:42:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.527 10:42:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:46.785 10:42:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:46.785 10:42:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.785 10:42:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:47.105 [2024-07-24 10:42:13.699317] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:16:47.105 10:42:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:47.105 10:42:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.105 10:42:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.105 10:42:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:47.364 10:42:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:47.364 10:42:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.364 10:42:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:47.622 [2024-07-24 10:42:14.206359] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.622 [2024-07-24 10:42:14.206660] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.622 [2024-07-24 10:42:14.206872] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.622 [2024-07-24 10:42:14.226618] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.622 [2024-07-24 10:42:14.227010] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:16:47.622 10:42:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:47.622 10:42:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.622 10:42:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.622 10:42:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:47.880 10:42:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:47.880 10:42:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:47.880 10:42:14 -- bdev/bdev_raid.sh@287 -- # killprocess 127888 00:16:47.880 10:42:14 -- common/autotest_common.sh@926 -- # '[' -z 127888 ']' 00:16:47.880 10:42:14 -- common/autotest_common.sh@930 -- # kill -0 127888 00:16:47.881 10:42:14 -- common/autotest_common.sh@931 -- # uname 00:16:47.881 10:42:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:47.881 10:42:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127888 00:16:48.139 killing process with pid 127888 00:16:48.139 10:42:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:48.139 10:42:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:48.139 10:42:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127888' 00:16:48.139 10:42:14 -- common/autotest_common.sh@945 -- # kill 127888 00:16:48.139 10:42:14 -- common/autotest_common.sh@950 -- # wait 127888 00:16:48.139 [2024-07-24 10:42:14.570000] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.139 [2024-07-24 10:42:14.570141] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:48.397 00:16:48.397 real 0m12.071s 00:16:48.397 user 0m21.976s 00:16:48.397 sys 0m1.584s 00:16:48.397 ************************************ 00:16:48.397 10:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.397 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 END TEST raid_state_function_test 00:16:48.397 ************************************ 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
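For orientation, the raid_state_function_test run that finishes above boils down to a short RPC conversation with the bdev_svc app listening on /var/tmp/spdk-raid.sock. The lines below are a condensed, hand-written sketch of that flow rather than captured output; they assume the same SPDK checkout path and that bdev_svc has already been started with -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid, exactly as the test does.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Creating the raid1 bdev before its members exist leaves it in the
  # "configuring" state; each base bdev is only noted as "doesn't exist now".
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Back the three members with 32 MiB, 512-byte-block malloc disks; each one
  # is claimed by the raid bdev as soon as it appears.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$b"
  done

  # With all members present the raid bdev reports "online"; this is the same
  # query that verify_raid_bdev_state issues in the trace above.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

  # Deleting one base bdev of a raid1 array leaves it online but degraded
  # (num_base_bdevs_discovered drops to 2), as the removal loop above shows.
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'

  # Tear-down mirrors the cleanup seen elsewhere in this log.
  $RPC bdev_raid_delete Existed_Raid
  $RPC bdev_malloc_delete BaseBdev2
  $RPC bdev_malloc_delete BaseBdev3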
00:16:48.397 10:42:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:48.397 10:42:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:48.397 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 ************************************ 00:16:48.397 START TEST raid_state_function_test_sb 00:16:48.397 ************************************ 00:16:48.397 10:42:14 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=128272 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 128272' 00:16:48.397 Process raid pid: 128272 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 128272 /var/tmp/spdk-raid.sock 00:16:48.397 10:42:14 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:48.397 10:42:14 -- common/autotest_common.sh@819 -- # '[' -z 128272 ']' 00:16:48.397 10:42:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:48.397 10:42:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:48.397 10:42:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:48.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:48.397 10:42:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:48.397 10:42:14 -- common/autotest_common.sh@10 -- # set +x 00:16:48.397 [2024-07-24 10:42:15.032018] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:16:48.397 [2024-07-24 10:42:15.032540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.656 [2024-07-24 10:42:15.180993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.656 [2024-07-24 10:42:15.281337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.656 [2024-07-24 10:42:15.338480] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.590 10:42:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.590 10:42:15 -- common/autotest_common.sh@852 -- # return 0 00:16:49.590 10:42:15 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:49.590 [2024-07-24 10:42:16.188002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.590 [2024-07-24 10:42:16.188401] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.590 [2024-07-24 10:42:16.188526] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.590 [2024-07-24 10:42:16.188593] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.590 [2024-07-24 10:42:16.188700] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.590 [2024-07-24 10:42:16.188796] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.590 10:42:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.848 10:42:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:49.848 "name": "Existed_Raid", 00:16:49.848 "uuid": "cbcea87d-8751-4855-9cd8-e5622c52271a", 00:16:49.848 "strip_size_kb": 0, 00:16:49.848 "state": "configuring", 00:16:49.848 "raid_level": "raid1", 00:16:49.848 "superblock": true, 00:16:49.848 "num_base_bdevs": 3, 00:16:49.848 "num_base_bdevs_discovered": 0, 00:16:49.848 "num_base_bdevs_operational": 3, 00:16:49.848 "base_bdevs_list": [ 00:16:49.848 { 00:16:49.848 "name": "BaseBdev1", 00:16:49.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.848 "is_configured": false, 00:16:49.848 "data_offset": 0, 00:16:49.848 "data_size": 0 00:16:49.848 }, 00:16:49.848 { 00:16:49.848 "name": "BaseBdev2", 00:16:49.848 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:49.848 "is_configured": false, 00:16:49.848 "data_offset": 0, 00:16:49.848 "data_size": 0 00:16:49.848 }, 00:16:49.848 { 00:16:49.848 "name": "BaseBdev3", 00:16:49.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.848 "is_configured": false, 00:16:49.848 "data_offset": 0, 00:16:49.848 "data_size": 0 00:16:49.848 } 00:16:49.848 ] 00:16:49.848 }' 00:16:49.848 10:42:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:49.848 10:42:16 -- common/autotest_common.sh@10 -- # set +x 00:16:50.808 10:42:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:50.808 [2024-07-24 10:42:17.320144] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.808 [2024-07-24 10:42:17.320458] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:16:50.808 10:42:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:51.066 [2024-07-24 10:42:17.580343] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:51.066 [2024-07-24 10:42:17.580760] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:51.066 [2024-07-24 10:42:17.580889] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:51.066 [2024-07-24 10:42:17.580964] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:51.066 [2024-07-24 10:42:17.581084] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:51.066 [2024-07-24 10:42:17.581159] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:51.066 10:42:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:51.325 [2024-07-24 10:42:17.871836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.325 BaseBdev1 00:16:51.325 10:42:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:51.325 10:42:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:51.325 10:42:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:51.325 10:42:17 -- common/autotest_common.sh@889 -- # local i 00:16:51.325 10:42:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:51.325 10:42:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:51.325 10:42:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.583 10:42:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:51.841 [ 00:16:51.841 { 00:16:51.841 "name": "BaseBdev1", 00:16:51.841 "aliases": [ 00:16:51.841 "184d7e7f-8776-47d0-8484-1ee8a5a57a62" 00:16:51.841 ], 00:16:51.841 "product_name": "Malloc disk", 00:16:51.841 "block_size": 512, 00:16:51.841 "num_blocks": 65536, 00:16:51.841 "uuid": "184d7e7f-8776-47d0-8484-1ee8a5a57a62", 00:16:51.841 "assigned_rate_limits": { 00:16:51.841 "rw_ios_per_sec": 0, 00:16:51.841 "rw_mbytes_per_sec": 0, 00:16:51.841 "r_mbytes_per_sec": 0, 00:16:51.841 "w_mbytes_per_sec": 0 
00:16:51.841 }, 00:16:51.841 "claimed": true, 00:16:51.841 "claim_type": "exclusive_write", 00:16:51.841 "zoned": false, 00:16:51.841 "supported_io_types": { 00:16:51.841 "read": true, 00:16:51.841 "write": true, 00:16:51.841 "unmap": true, 00:16:51.841 "write_zeroes": true, 00:16:51.841 "flush": true, 00:16:51.841 "reset": true, 00:16:51.841 "compare": false, 00:16:51.841 "compare_and_write": false, 00:16:51.841 "abort": true, 00:16:51.841 "nvme_admin": false, 00:16:51.841 "nvme_io": false 00:16:51.841 }, 00:16:51.841 "memory_domains": [ 00:16:51.841 { 00:16:51.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.841 "dma_device_type": 2 00:16:51.841 } 00:16:51.841 ], 00:16:51.841 "driver_specific": {} 00:16:51.841 } 00:16:51.841 ] 00:16:51.841 10:42:18 -- common/autotest_common.sh@895 -- # return 0 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.841 10:42:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.099 10:42:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.099 "name": "Existed_Raid", 00:16:52.099 "uuid": "c12fe6a9-a58d-4e77-963b-4c26d88f5899", 00:16:52.099 "strip_size_kb": 0, 00:16:52.099 "state": "configuring", 00:16:52.099 "raid_level": "raid1", 00:16:52.099 "superblock": true, 00:16:52.099 "num_base_bdevs": 3, 00:16:52.099 "num_base_bdevs_discovered": 1, 00:16:52.099 "num_base_bdevs_operational": 3, 00:16:52.099 "base_bdevs_list": [ 00:16:52.099 { 00:16:52.099 "name": "BaseBdev1", 00:16:52.099 "uuid": "184d7e7f-8776-47d0-8484-1ee8a5a57a62", 00:16:52.099 "is_configured": true, 00:16:52.099 "data_offset": 2048, 00:16:52.099 "data_size": 63488 00:16:52.099 }, 00:16:52.099 { 00:16:52.099 "name": "BaseBdev2", 00:16:52.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.099 "is_configured": false, 00:16:52.099 "data_offset": 0, 00:16:52.099 "data_size": 0 00:16:52.099 }, 00:16:52.099 { 00:16:52.099 "name": "BaseBdev3", 00:16:52.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.099 "is_configured": false, 00:16:52.099 "data_offset": 0, 00:16:52.099 "data_size": 0 00:16:52.099 } 00:16:52.099 ] 00:16:52.099 }' 00:16:52.099 10:42:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.099 10:42:18 -- common/autotest_common.sh@10 -- # set +x 00:16:52.665 10:42:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.924 [2024-07-24 10:42:19.560379] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.924 [2024-07-24 10:42:19.560764] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000005780 name Existed_Raid, state configuring 00:16:52.924 10:42:19 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:52.924 10:42:19 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:53.182 10:42:19 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:53.440 BaseBdev1 00:16:53.440 10:42:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:53.440 10:42:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:53.440 10:42:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:53.440 10:42:20 -- common/autotest_common.sh@889 -- # local i 00:16:53.440 10:42:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:53.440 10:42:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:53.440 10:42:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.698 10:42:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:53.955 [ 00:16:53.955 { 00:16:53.955 "name": "BaseBdev1", 00:16:53.955 "aliases": [ 00:16:53.955 "674a0365-5f7e-4a9b-8405-ebaec9a1693f" 00:16:53.955 ], 00:16:53.955 "product_name": "Malloc disk", 00:16:53.955 "block_size": 512, 00:16:53.955 "num_blocks": 65536, 00:16:53.955 "uuid": "674a0365-5f7e-4a9b-8405-ebaec9a1693f", 00:16:53.955 "assigned_rate_limits": { 00:16:53.955 "rw_ios_per_sec": 0, 00:16:53.955 "rw_mbytes_per_sec": 0, 00:16:53.955 "r_mbytes_per_sec": 0, 00:16:53.955 "w_mbytes_per_sec": 0 00:16:53.955 }, 00:16:53.955 "claimed": false, 00:16:53.955 "zoned": false, 00:16:53.955 "supported_io_types": { 00:16:53.955 "read": true, 00:16:53.955 "write": true, 00:16:53.955 "unmap": true, 00:16:53.955 "write_zeroes": true, 00:16:53.955 "flush": true, 00:16:53.955 "reset": true, 00:16:53.955 "compare": false, 00:16:53.955 "compare_and_write": false, 00:16:53.955 "abort": true, 00:16:53.955 "nvme_admin": false, 00:16:53.955 "nvme_io": false 00:16:53.955 }, 00:16:53.955 "memory_domains": [ 00:16:53.955 { 00:16:53.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.955 "dma_device_type": 2 00:16:53.955 } 00:16:53.955 ], 00:16:53.955 "driver_specific": {} 00:16:53.955 } 00:16:53.955 ] 00:16:53.955 10:42:20 -- common/autotest_common.sh@895 -- # return 0 00:16:53.955 10:42:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:54.215 [2024-07-24 10:42:20.823772] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.215 [2024-07-24 10:42:20.826573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.215 [2024-07-24 10:42:20.826778] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.215 [2024-07-24 10:42:20.826902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.215 [2024-07-24 10:42:20.826976] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:54.215 10:42:20 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.215 10:42:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.471 10:42:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.471 "name": "Existed_Raid", 00:16:54.471 "uuid": "9716b890-52c1-450d-a8c3-763466695e21", 00:16:54.471 "strip_size_kb": 0, 00:16:54.471 "state": "configuring", 00:16:54.471 "raid_level": "raid1", 00:16:54.471 "superblock": true, 00:16:54.471 "num_base_bdevs": 3, 00:16:54.471 "num_base_bdevs_discovered": 1, 00:16:54.471 "num_base_bdevs_operational": 3, 00:16:54.471 "base_bdevs_list": [ 00:16:54.471 { 00:16:54.471 "name": "BaseBdev1", 00:16:54.471 "uuid": "674a0365-5f7e-4a9b-8405-ebaec9a1693f", 00:16:54.471 "is_configured": true, 00:16:54.471 "data_offset": 2048, 00:16:54.471 "data_size": 63488 00:16:54.471 }, 00:16:54.471 { 00:16:54.471 "name": "BaseBdev2", 00:16:54.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.471 "is_configured": false, 00:16:54.471 "data_offset": 0, 00:16:54.471 "data_size": 0 00:16:54.471 }, 00:16:54.471 { 00:16:54.471 "name": "BaseBdev3", 00:16:54.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.471 "is_configured": false, 00:16:54.471 "data_offset": 0, 00:16:54.471 "data_size": 0 00:16:54.471 } 00:16:54.471 ] 00:16:54.471 }' 00:16:54.471 10:42:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.471 10:42:21 -- common/autotest_common.sh@10 -- # set +x 00:16:55.402 10:42:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:55.402 [2024-07-24 10:42:22.036711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.402 BaseBdev2 00:16:55.402 10:42:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:55.402 10:42:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:55.402 10:42:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:55.402 10:42:22 -- common/autotest_common.sh@889 -- # local i 00:16:55.402 10:42:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:55.402 10:42:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:55.402 10:42:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.660 10:42:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:55.917 [ 00:16:55.917 { 00:16:55.917 "name": "BaseBdev2", 00:16:55.917 "aliases": [ 00:16:55.917 
"2631cb87-f3d6-4949-868f-a53d7b130036" 00:16:55.917 ], 00:16:55.917 "product_name": "Malloc disk", 00:16:55.917 "block_size": 512, 00:16:55.917 "num_blocks": 65536, 00:16:55.917 "uuid": "2631cb87-f3d6-4949-868f-a53d7b130036", 00:16:55.917 "assigned_rate_limits": { 00:16:55.917 "rw_ios_per_sec": 0, 00:16:55.917 "rw_mbytes_per_sec": 0, 00:16:55.917 "r_mbytes_per_sec": 0, 00:16:55.917 "w_mbytes_per_sec": 0 00:16:55.917 }, 00:16:55.917 "claimed": true, 00:16:55.917 "claim_type": "exclusive_write", 00:16:55.917 "zoned": false, 00:16:55.917 "supported_io_types": { 00:16:55.917 "read": true, 00:16:55.917 "write": true, 00:16:55.917 "unmap": true, 00:16:55.917 "write_zeroes": true, 00:16:55.917 "flush": true, 00:16:55.917 "reset": true, 00:16:55.917 "compare": false, 00:16:55.917 "compare_and_write": false, 00:16:55.917 "abort": true, 00:16:55.917 "nvme_admin": false, 00:16:55.917 "nvme_io": false 00:16:55.917 }, 00:16:55.917 "memory_domains": [ 00:16:55.917 { 00:16:55.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.917 "dma_device_type": 2 00:16:55.917 } 00:16:55.917 ], 00:16:55.917 "driver_specific": {} 00:16:55.917 } 00:16:55.917 ] 00:16:55.917 10:42:22 -- common/autotest_common.sh@895 -- # return 0 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.917 10:42:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.176 10:42:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.176 "name": "Existed_Raid", 00:16:56.176 "uuid": "9716b890-52c1-450d-a8c3-763466695e21", 00:16:56.176 "strip_size_kb": 0, 00:16:56.176 "state": "configuring", 00:16:56.176 "raid_level": "raid1", 00:16:56.176 "superblock": true, 00:16:56.176 "num_base_bdevs": 3, 00:16:56.176 "num_base_bdevs_discovered": 2, 00:16:56.176 "num_base_bdevs_operational": 3, 00:16:56.176 "base_bdevs_list": [ 00:16:56.176 { 00:16:56.176 "name": "BaseBdev1", 00:16:56.176 "uuid": "674a0365-5f7e-4a9b-8405-ebaec9a1693f", 00:16:56.176 "is_configured": true, 00:16:56.176 "data_offset": 2048, 00:16:56.176 "data_size": 63488 00:16:56.176 }, 00:16:56.176 { 00:16:56.176 "name": "BaseBdev2", 00:16:56.176 "uuid": "2631cb87-f3d6-4949-868f-a53d7b130036", 00:16:56.176 "is_configured": true, 00:16:56.176 "data_offset": 2048, 00:16:56.176 "data_size": 63488 00:16:56.176 }, 00:16:56.176 { 00:16:56.176 "name": "BaseBdev3", 00:16:56.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.176 "is_configured": false, 00:16:56.176 "data_offset": 0, 00:16:56.176 "data_size": 0 00:16:56.176 } 
00:16:56.176 ] 00:16:56.176 }' 00:16:56.176 10:42:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.176 10:42:22 -- common/autotest_common.sh@10 -- # set +x 00:16:57.110 10:42:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.110 [2024-07-24 10:42:23.689590] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.110 [2024-07-24 10:42:23.690174] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:16:57.110 [2024-07-24 10:42:23.690318] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:57.110 [2024-07-24 10:42:23.690512] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:16:57.110 [2024-07-24 10:42:23.691086] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:16:57.110 [2024-07-24 10:42:23.691220] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:16:57.110 BaseBdev3 00:16:57.110 [2024-07-24 10:42:23.691567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.110 10:42:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:57.110 10:42:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:57.110 10:42:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:57.110 10:42:23 -- common/autotest_common.sh@889 -- # local i 00:16:57.110 10:42:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:57.110 10:42:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:57.110 10:42:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.368 10:42:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.626 [ 00:16:57.626 { 00:16:57.626 "name": "BaseBdev3", 00:16:57.626 "aliases": [ 00:16:57.626 "5c8602c6-2cf2-4535-93fb-ecde9f18f9da" 00:16:57.626 ], 00:16:57.626 "product_name": "Malloc disk", 00:16:57.626 "block_size": 512, 00:16:57.626 "num_blocks": 65536, 00:16:57.626 "uuid": "5c8602c6-2cf2-4535-93fb-ecde9f18f9da", 00:16:57.626 "assigned_rate_limits": { 00:16:57.626 "rw_ios_per_sec": 0, 00:16:57.626 "rw_mbytes_per_sec": 0, 00:16:57.626 "r_mbytes_per_sec": 0, 00:16:57.626 "w_mbytes_per_sec": 0 00:16:57.626 }, 00:16:57.626 "claimed": true, 00:16:57.626 "claim_type": "exclusive_write", 00:16:57.626 "zoned": false, 00:16:57.626 "supported_io_types": { 00:16:57.626 "read": true, 00:16:57.626 "write": true, 00:16:57.626 "unmap": true, 00:16:57.626 "write_zeroes": true, 00:16:57.626 "flush": true, 00:16:57.626 "reset": true, 00:16:57.626 "compare": false, 00:16:57.626 "compare_and_write": false, 00:16:57.626 "abort": true, 00:16:57.626 "nvme_admin": false, 00:16:57.626 "nvme_io": false 00:16:57.626 }, 00:16:57.626 "memory_domains": [ 00:16:57.626 { 00:16:57.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.626 "dma_device_type": 2 00:16:57.626 } 00:16:57.626 ], 00:16:57.626 "driver_specific": {} 00:16:57.626 } 00:16:57.626 ] 00:16:57.626 10:42:24 -- common/autotest_common.sh@895 -- # return 0 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.626 10:42:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.883 10:42:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.883 "name": "Existed_Raid", 00:16:57.883 "uuid": "9716b890-52c1-450d-a8c3-763466695e21", 00:16:57.883 "strip_size_kb": 0, 00:16:57.883 "state": "online", 00:16:57.883 "raid_level": "raid1", 00:16:57.883 "superblock": true, 00:16:57.883 "num_base_bdevs": 3, 00:16:57.883 "num_base_bdevs_discovered": 3, 00:16:57.883 "num_base_bdevs_operational": 3, 00:16:57.883 "base_bdevs_list": [ 00:16:57.883 { 00:16:57.883 "name": "BaseBdev1", 00:16:57.883 "uuid": "674a0365-5f7e-4a9b-8405-ebaec9a1693f", 00:16:57.883 "is_configured": true, 00:16:57.883 "data_offset": 2048, 00:16:57.883 "data_size": 63488 00:16:57.883 }, 00:16:57.883 { 00:16:57.883 "name": "BaseBdev2", 00:16:57.884 "uuid": "2631cb87-f3d6-4949-868f-a53d7b130036", 00:16:57.884 "is_configured": true, 00:16:57.884 "data_offset": 2048, 00:16:57.884 "data_size": 63488 00:16:57.884 }, 00:16:57.884 { 00:16:57.884 "name": "BaseBdev3", 00:16:57.884 "uuid": "5c8602c6-2cf2-4535-93fb-ecde9f18f9da", 00:16:57.884 "is_configured": true, 00:16:57.884 "data_offset": 2048, 00:16:57.884 "data_size": 63488 00:16:57.884 } 00:16:57.884 ] 00:16:57.884 }' 00:16:57.884 10:42:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.884 10:42:24 -- common/autotest_common.sh@10 -- # set +x 00:16:58.447 10:42:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:58.704 [2024-07-24 10:42:25.318400] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
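The verify_raid_bdev_state helper being traced here reduces to one bdev_raid_get_bdevs RPC filtered through jq, followed by plain string comparisons on the fields of interest. A minimal shell sketch of that check, assuming the same RPC socket (/var/tmp/spdk-raid.sock) and the raid bdev name Existed_Raid used in this run:

    # fetch the description of the raid bdev under test (sketch, not the exact helper)
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
           bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # compare the reported fields against what the test expects at this point
    [ "$(jq -r '.state' <<< "$info")" = online ] || echo 'unexpected state'
    [ "$(jq -r '.raid_level' <<< "$info")" = raid1 ] || echo 'unexpected raid level'
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" = 2 ] || echo 'unexpected operational count'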
00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.704 10:42:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.962 10:42:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.962 "name": "Existed_Raid", 00:16:58.962 "uuid": "9716b890-52c1-450d-a8c3-763466695e21", 00:16:58.962 "strip_size_kb": 0, 00:16:58.962 "state": "online", 00:16:58.962 "raid_level": "raid1", 00:16:58.962 "superblock": true, 00:16:58.962 "num_base_bdevs": 3, 00:16:58.962 "num_base_bdevs_discovered": 2, 00:16:58.962 "num_base_bdevs_operational": 2, 00:16:58.962 "base_bdevs_list": [ 00:16:58.962 { 00:16:58.962 "name": null, 00:16:58.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.962 "is_configured": false, 00:16:58.962 "data_offset": 2048, 00:16:58.962 "data_size": 63488 00:16:58.962 }, 00:16:58.962 { 00:16:58.962 "name": "BaseBdev2", 00:16:58.962 "uuid": "2631cb87-f3d6-4949-868f-a53d7b130036", 00:16:58.962 "is_configured": true, 00:16:58.962 "data_offset": 2048, 00:16:58.962 "data_size": 63488 00:16:58.962 }, 00:16:58.962 { 00:16:58.962 "name": "BaseBdev3", 00:16:58.962 "uuid": "5c8602c6-2cf2-4535-93fb-ecde9f18f9da", 00:16:58.962 "is_configured": true, 00:16:58.962 "data_offset": 2048, 00:16:58.962 "data_size": 63488 00:16:58.962 } 00:16:58.962 ] 00:16:58.962 }' 00:16:58.962 10:42:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.962 10:42:25 -- common/autotest_common.sh@10 -- # set +x 00:16:59.895 10:42:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:59.895 10:42:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:59.895 10:42:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.895 10:42:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:59.895 10:42:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:59.895 10:42:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.895 10:42:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:00.153 [2024-07-24 10:42:26.781807] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.153 10:42:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:00.153 10:42:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.153 10:42:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.153 10:42:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:00.410 10:42:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:00.410 10:42:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.410 10:42:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:00.668 [2024-07-24 10:42:27.282141] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:00.668 [2024-07-24 10:42:27.282404] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.668 [2024-07-24 10:42:27.282639] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.668 [2024-07-24 10:42:27.299895] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.668 [2024-07-24 10:42:27.300115] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:00.668 10:42:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:00.668 10:42:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.668 10:42:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.668 10:42:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:00.926 10:42:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:00.926 10:42:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:00.926 10:42:27 -- bdev/bdev_raid.sh@287 -- # killprocess 128272 00:17:00.926 10:42:27 -- common/autotest_common.sh@926 -- # '[' -z 128272 ']' 00:17:00.926 10:42:27 -- common/autotest_common.sh@930 -- # kill -0 128272 00:17:00.926 10:42:27 -- common/autotest_common.sh@931 -- # uname 00:17:00.926 10:42:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:00.926 10:42:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128272 00:17:00.926 killing process with pid 128272 00:17:00.926 10:42:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:00.926 10:42:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:00.926 10:42:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128272' 00:17:00.926 10:42:27 -- common/autotest_common.sh@945 -- # kill 128272 00:17:00.926 10:42:27 -- common/autotest_common.sh@950 -- # wait 128272 00:17:00.926 [2024-07-24 10:42:27.610669] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.926 [2024-07-24 10:42:27.610782] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.492 ************************************ 00:17:01.492 END TEST raid_state_function_test_sb 00:17:01.492 ************************************ 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:01.492 00:17:01.492 real 0m12.974s 00:17:01.492 user 0m23.533s 00:17:01.492 sys 0m1.779s 00:17:01.492 10:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.492 10:42:27 -- common/autotest_common.sh@10 -- # set +x 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:01.492 10:42:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:01.492 10:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.492 10:42:27 -- common/autotest_common.sh@10 -- # set +x 00:17:01.492 ************************************ 00:17:01.492 START TEST raid_superblock_test 00:17:01.492 ************************************ 00:17:01.492 10:42:27 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:01.492 10:42:27 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:01.493 10:42:27 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:01.493 10:42:28 -- bdev/bdev_raid.sh@357 -- # raid_pid=128664 00:17:01.493 10:42:28 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:01.493 10:42:28 -- bdev/bdev_raid.sh@358 -- # waitforlisten 128664 /var/tmp/spdk-raid.sock 00:17:01.493 10:42:28 -- common/autotest_common.sh@819 -- # '[' -z 128664 ']' 00:17:01.493 10:42:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:01.493 10:42:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.493 10:42:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:01.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:01.493 10:42:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.493 10:42:28 -- common/autotest_common.sh@10 -- # set +x 00:17:01.493 [2024-07-24 10:42:28.051902] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:01.493 [2024-07-24 10:42:28.052412] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128664 ] 00:17:01.751 [2024-07-24 10:42:28.192888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.751 [2024-07-24 10:42:28.319860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.751 [2024-07-24 10:42:28.395152] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.684 10:42:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.684 10:42:29 -- common/autotest_common.sh@852 -- # return 0 00:17:02.684 10:42:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:02.684 10:42:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:02.684 10:42:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:02.684 10:42:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:02.684 10:42:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:02.684 10:42:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.685 10:42:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.685 10:42:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.685 10:42:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:02.685 malloc1 00:17:02.685 10:42:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.943 [2024-07-24 10:42:29.544180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.943 [2024-07-24 10:42:29.544495] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.943 [2024-07-24 10:42:29.544703] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:02.943 [2024-07-24 10:42:29.544892] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.943 [2024-07-24 10:42:29.548030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.943 [2024-07-24 10:42:29.548224] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.943 pt1 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:02.943 10:42:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:03.201 malloc2 00:17:03.201 10:42:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.459 [2024-07-24 10:42:30.063806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.459 [2024-07-24 10:42:30.064341] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.459 [2024-07-24 10:42:30.064584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:03.459 [2024-07-24 10:42:30.064826] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.459 [2024-07-24 10:42:30.068635] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.459 [2024-07-24 10:42:30.068884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.459 pt2 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.459 10:42:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:03.716 malloc3 00:17:03.716 10:42:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:03.974 [2024-07-24 10:42:30.570540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:03.974 [2024-07-24 10:42:30.570884] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.974 [2024-07-24 10:42:30.571092] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:03.974 [2024-07-24 10:42:30.571251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.974 [2024-07-24 10:42:30.574144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.974 [2024-07-24 10:42:30.574328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:03.974 pt3 00:17:03.974 10:42:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:03.975 10:42:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:03.975 10:42:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:04.233 [2024-07-24 10:42:30.862910] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:04.233 [2024-07-24 10:42:30.865786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.233 [2024-07-24 10:42:30.866035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:04.233 [2024-07-24 10:42:30.866443] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:04.233 [2024-07-24 10:42:30.866576] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.233 [2024-07-24 10:42:30.866837] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:17:04.233 [2024-07-24 10:42:30.867464] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:04.233 [2024-07-24 10:42:30.867609] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:17:04.233 [2024-07-24 10:42:30.867948] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.233 10:42:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.800 10:42:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:04.800 "name": "raid_bdev1", 00:17:04.800 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:04.800 "strip_size_kb": 0, 00:17:04.800 "state": "online", 00:17:04.800 "raid_level": "raid1", 00:17:04.800 "superblock": true, 00:17:04.800 "num_base_bdevs": 3, 00:17:04.800 "num_base_bdevs_discovered": 3, 00:17:04.800 "num_base_bdevs_operational": 3, 00:17:04.800 "base_bdevs_list": [ 00:17:04.800 { 00:17:04.800 "name": 
"pt1", 00:17:04.800 "uuid": "e95d05eb-ef1c-5b0d-ae2b-4c49c821f1a5", 00:17:04.800 "is_configured": true, 00:17:04.800 "data_offset": 2048, 00:17:04.800 "data_size": 63488 00:17:04.800 }, 00:17:04.800 { 00:17:04.800 "name": "pt2", 00:17:04.800 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:04.800 "is_configured": true, 00:17:04.800 "data_offset": 2048, 00:17:04.800 "data_size": 63488 00:17:04.800 }, 00:17:04.800 { 00:17:04.800 "name": "pt3", 00:17:04.800 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:04.800 "is_configured": true, 00:17:04.800 "data_offset": 2048, 00:17:04.800 "data_size": 63488 00:17:04.800 } 00:17:04.800 ] 00:17:04.800 }' 00:17:04.800 10:42:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:04.800 10:42:31 -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 10:42:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:05.367 10:42:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.625 [2024-07-24 10:42:32.136603] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.625 10:42:32 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=cbfa3f5c-6192-4e8b-b700-d50ed14eea17 00:17:05.625 10:42:32 -- bdev/bdev_raid.sh@380 -- # '[' -z cbfa3f5c-6192-4e8b-b700-d50ed14eea17 ']' 00:17:05.626 10:42:32 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:05.884 [2024-07-24 10:42:32.376382] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:05.884 [2024-07-24 10:42:32.376753] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.884 [2024-07-24 10:42:32.377047] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.884 [2024-07-24 10:42:32.377305] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.884 [2024-07-24 10:42:32.377437] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:17:05.884 10:42:32 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.884 10:42:32 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:06.150 10:42:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:06.150 10:42:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:06.150 10:42:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.150 10:42:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:06.407 10:42:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.407 10:42:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:06.665 10:42:33 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.665 10:42:33 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:06.924 10:42:33 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:06.924 10:42:33 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:07.182 10:42:33 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:07.182 10:42:33 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.182 10:42:33 -- common/autotest_common.sh@640 -- # local es=0 00:17:07.182 10:42:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.182 10:42:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.182 10:42:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.182 10:42:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.182 10:42:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.182 10:42:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.182 10:42:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:07.182 10:42:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.182 10:42:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:07.182 10:42:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.441 [2024-07-24 10:42:33.908717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:07.441 [2024-07-24 10:42:33.911595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:07.441 [2024-07-24 10:42:33.911797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:07.441 [2024-07-24 10:42:33.912022] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:07.441 [2024-07-24 10:42:33.912261] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:07.441 [2024-07-24 10:42:33.912444] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:07.441 [2024-07-24 10:42:33.912622] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.441 [2024-07-24 10:42:33.912743] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:17:07.441 request: 00:17:07.441 { 00:17:07.441 "name": "raid_bdev1", 00:17:07.441 "raid_level": "raid1", 00:17:07.441 "base_bdevs": [ 00:17:07.441 "malloc1", 00:17:07.441 "malloc2", 00:17:07.441 "malloc3" 00:17:07.441 ], 00:17:07.441 "superblock": false, 00:17:07.441 "method": "bdev_raid_create", 00:17:07.441 "req_id": 1 00:17:07.441 } 00:17:07.441 Got JSON-RPC error response 00:17:07.441 response: 00:17:07.441 { 00:17:07.441 "code": -17, 00:17:07.441 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:07.441 } 00:17:07.441 10:42:33 -- common/autotest_common.sh@643 -- # es=1 00:17:07.441 10:42:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:07.441 10:42:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:07.441 10:42:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:07.441 10:42:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:07.441 10:42:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:07.700 10:42:34 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:07.700 10:42:34 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:07.700 10:42:34 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.700 [2024-07-24 10:42:34.377266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.700 [2024-07-24 10:42:34.377651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.700 [2024-07-24 10:42:34.377747] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:07.700 [2024-07-24 10:42:34.378002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.700 [2024-07-24 10:42:34.380948] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.700 [2024-07-24 10:42:34.381124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.700 [2024-07-24 10:42:34.381368] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:07.700 [2024-07-24 10:42:34.381537] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.700 pt1 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.958 "name": "raid_bdev1", 00:17:07.958 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:07.958 "strip_size_kb": 0, 00:17:07.958 "state": "configuring", 00:17:07.958 "raid_level": "raid1", 00:17:07.958 "superblock": true, 00:17:07.958 "num_base_bdevs": 3, 00:17:07.958 "num_base_bdevs_discovered": 1, 00:17:07.958 "num_base_bdevs_operational": 3, 00:17:07.958 "base_bdevs_list": [ 00:17:07.958 { 00:17:07.958 "name": "pt1", 00:17:07.958 "uuid": "e95d05eb-ef1c-5b0d-ae2b-4c49c821f1a5", 00:17:07.958 "is_configured": true, 00:17:07.958 "data_offset": 2048, 00:17:07.958 "data_size": 63488 00:17:07.958 }, 00:17:07.958 { 00:17:07.958 "name": null, 00:17:07.958 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:07.958 "is_configured": false, 00:17:07.958 "data_offset": 2048, 00:17:07.958 "data_size": 63488 00:17:07.958 }, 00:17:07.958 { 00:17:07.958 "name": null, 00:17:07.958 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:07.958 "is_configured": false, 00:17:07.958 "data_offset": 2048, 00:17:07.958 
"data_size": 63488 00:17:07.958 } 00:17:07.958 ] 00:17:07.958 }' 00:17:07.958 10:42:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.958 10:42:34 -- common/autotest_common.sh@10 -- # set +x 00:17:08.904 10:42:35 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:08.904 10:42:35 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.162 [2024-07-24 10:42:35.637859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.162 [2024-07-24 10:42:35.638297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.162 [2024-07-24 10:42:35.638480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:09.162 [2024-07-24 10:42:35.638642] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.162 [2024-07-24 10:42:35.639336] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.162 [2024-07-24 10:42:35.639528] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.162 [2024-07-24 10:42:35.639786] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:09.162 [2024-07-24 10:42:35.639929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.162 pt2 00:17:09.162 10:42:35 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:09.421 [2024-07-24 10:42:35.878055] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.421 10:42:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.679 10:42:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.679 "name": "raid_bdev1", 00:17:09.679 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:09.679 "strip_size_kb": 0, 00:17:09.679 "state": "configuring", 00:17:09.679 "raid_level": "raid1", 00:17:09.679 "superblock": true, 00:17:09.679 "num_base_bdevs": 3, 00:17:09.679 "num_base_bdevs_discovered": 1, 00:17:09.679 "num_base_bdevs_operational": 3, 00:17:09.679 "base_bdevs_list": [ 00:17:09.679 { 00:17:09.679 "name": "pt1", 00:17:09.679 "uuid": "e95d05eb-ef1c-5b0d-ae2b-4c49c821f1a5", 00:17:09.679 "is_configured": true, 00:17:09.679 "data_offset": 2048, 00:17:09.679 "data_size": 63488 00:17:09.679 }, 00:17:09.679 { 00:17:09.679 "name": null, 00:17:09.679 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 
00:17:09.679 "is_configured": false, 00:17:09.679 "data_offset": 2048, 00:17:09.679 "data_size": 63488 00:17:09.679 }, 00:17:09.679 { 00:17:09.679 "name": null, 00:17:09.679 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:09.679 "is_configured": false, 00:17:09.679 "data_offset": 2048, 00:17:09.679 "data_size": 63488 00:17:09.679 } 00:17:09.679 ] 00:17:09.679 }' 00:17:09.679 10:42:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.679 10:42:36 -- common/autotest_common.sh@10 -- # set +x 00:17:10.245 10:42:36 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:10.245 10:42:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:10.246 10:42:36 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.504 [2024-07-24 10:42:37.086236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.504 [2024-07-24 10:42:37.086761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.504 [2024-07-24 10:42:37.086933] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:10.504 [2024-07-24 10:42:37.087075] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.504 [2024-07-24 10:42:37.087709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.504 [2024-07-24 10:42:37.087912] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.504 [2024-07-24 10:42:37.088150] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:10.504 [2024-07-24 10:42:37.088293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.504 pt2 00:17:10.504 10:42:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:10.504 10:42:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:10.504 10:42:37 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:10.763 [2024-07-24 10:42:37.390333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:10.763 [2024-07-24 10:42:37.390739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.763 [2024-07-24 10:42:37.390908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:10.763 [2024-07-24 10:42:37.391041] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.763 [2024-07-24 10:42:37.391681] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.763 [2024-07-24 10:42:37.391866] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:10.763 [2024-07-24 10:42:37.392127] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:10.763 [2024-07-24 10:42:37.392268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:10.763 [2024-07-24 10:42:37.392612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:10.763 [2024-07-24 10:42:37.392751] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:10.763 [2024-07-24 10:42:37.392883] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:17:10.763 
[2024-07-24 10:42:37.393385] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:10.763 [2024-07-24 10:42:37.393537] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:10.763 [2024-07-24 10:42:37.393764] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.763 pt3 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.763 10:42:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.022 10:42:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.022 "name": "raid_bdev1", 00:17:11.022 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:11.022 "strip_size_kb": 0, 00:17:11.022 "state": "online", 00:17:11.022 "raid_level": "raid1", 00:17:11.022 "superblock": true, 00:17:11.022 "num_base_bdevs": 3, 00:17:11.022 "num_base_bdevs_discovered": 3, 00:17:11.022 "num_base_bdevs_operational": 3, 00:17:11.022 "base_bdevs_list": [ 00:17:11.022 { 00:17:11.022 "name": "pt1", 00:17:11.022 "uuid": "e95d05eb-ef1c-5b0d-ae2b-4c49c821f1a5", 00:17:11.022 "is_configured": true, 00:17:11.022 "data_offset": 2048, 00:17:11.022 "data_size": 63488 00:17:11.022 }, 00:17:11.022 { 00:17:11.022 "name": "pt2", 00:17:11.022 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:11.022 "is_configured": true, 00:17:11.022 "data_offset": 2048, 00:17:11.022 "data_size": 63488 00:17:11.022 }, 00:17:11.022 { 00:17:11.022 "name": "pt3", 00:17:11.022 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:11.022 "is_configured": true, 00:17:11.022 "data_offset": 2048, 00:17:11.022 "data_size": 63488 00:17:11.022 } 00:17:11.022 ] 00:17:11.022 }' 00:17:11.022 10:42:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.022 10:42:37 -- common/autotest_common.sh@10 -- # set +x 00:17:11.956 10:42:38 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:11.956 10:42:38 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:11.956 [2024-07-24 10:42:38.558930] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.956 10:42:38 -- bdev/bdev_raid.sh@430 -- # '[' cbfa3f5c-6192-4e8b-b700-d50ed14eea17 '!=' cbfa3f5c-6192-4e8b-b700-d50ed14eea17 ']' 00:17:11.956 10:42:38 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:11.956 10:42:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:11.956 10:42:38 -- bdev/bdev_raid.sh@196 -- # return 0 
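has_redundancy returns 0 for raid1 above, so the test may now remove a single base bdev and still expect the array to report itself online. The step traced next boils down to roughly the following, again assuming this run's socket and bdev names:

    # drop one mirror leg; a raid1 array tolerates the loss of one base bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    # the array should stay "online" with two of three base bdevs discovered
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered'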
00:17:11.956 10:42:38 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:12.214 [2024-07-24 10:42:38.814793] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.214 10:42:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.472 10:42:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.472 "name": "raid_bdev1", 00:17:12.472 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:12.472 "strip_size_kb": 0, 00:17:12.472 "state": "online", 00:17:12.472 "raid_level": "raid1", 00:17:12.472 "superblock": true, 00:17:12.472 "num_base_bdevs": 3, 00:17:12.472 "num_base_bdevs_discovered": 2, 00:17:12.472 "num_base_bdevs_operational": 2, 00:17:12.472 "base_bdevs_list": [ 00:17:12.472 { 00:17:12.472 "name": null, 00:17:12.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.472 "is_configured": false, 00:17:12.472 "data_offset": 2048, 00:17:12.472 "data_size": 63488 00:17:12.472 }, 00:17:12.472 { 00:17:12.472 "name": "pt2", 00:17:12.472 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:12.472 "is_configured": true, 00:17:12.472 "data_offset": 2048, 00:17:12.472 "data_size": 63488 00:17:12.472 }, 00:17:12.472 { 00:17:12.472 "name": "pt3", 00:17:12.472 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:12.472 "is_configured": true, 00:17:12.472 "data_offset": 2048, 00:17:12.472 "data_size": 63488 00:17:12.472 } 00:17:12.472 ] 00:17:12.472 }' 00:17:12.472 10:42:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.472 10:42:39 -- common/autotest_common.sh@10 -- # set +x 00:17:13.038 10:42:39 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:13.295 [2024-07-24 10:42:39.930986] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.296 [2024-07-24 10:42:39.931299] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.296 [2024-07-24 10:42:39.931558] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.296 [2024-07-24 10:42:39.931758] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.296 [2024-07-24 10:42:39.931873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:13.296 10:42:39 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:13.296 10:42:39 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:13.554 10:42:40 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:13.554 10:42:40 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:13.554 10:42:40 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:13.554 10:42:40 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:13.554 10:42:40 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:14.119 10:42:40 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:14.119 10:42:40 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:14.119 10:42:40 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:14.377 10:42:40 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:14.377 10:42:40 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:14.377 10:42:40 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:14.377 10:42:40 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:14.377 10:42:40 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:14.377 [2024-07-24 10:42:41.015254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.377 [2024-07-24 10:42:41.015727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.377 [2024-07-24 10:42:41.015891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:14.377 [2024-07-24 10:42:41.016042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.377 [2024-07-24 10:42:41.018812] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.377 [2024-07-24 10:42:41.019022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.377 [2024-07-24 10:42:41.019258] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:14.377 [2024-07-24 10:42:41.019426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.377 pt2 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.377 10:42:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.942 10:42:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.942 "name": "raid_bdev1", 00:17:14.942 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:14.942 "strip_size_kb": 0, 00:17:14.943 "state": "configuring", 00:17:14.943 "raid_level": 
"raid1", 00:17:14.943 "superblock": true, 00:17:14.943 "num_base_bdevs": 3, 00:17:14.943 "num_base_bdevs_discovered": 1, 00:17:14.943 "num_base_bdevs_operational": 2, 00:17:14.943 "base_bdevs_list": [ 00:17:14.943 { 00:17:14.943 "name": null, 00:17:14.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.943 "is_configured": false, 00:17:14.943 "data_offset": 2048, 00:17:14.943 "data_size": 63488 00:17:14.943 }, 00:17:14.943 { 00:17:14.943 "name": "pt2", 00:17:14.943 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:14.943 "is_configured": true, 00:17:14.943 "data_offset": 2048, 00:17:14.943 "data_size": 63488 00:17:14.943 }, 00:17:14.943 { 00:17:14.943 "name": null, 00:17:14.943 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:14.943 "is_configured": false, 00:17:14.943 "data_offset": 2048, 00:17:14.943 "data_size": 63488 00:17:14.943 } 00:17:14.943 ] 00:17:14.943 }' 00:17:14.943 10:42:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.943 10:42:41 -- common/autotest_common.sh@10 -- # set +x 00:17:15.509 10:42:42 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:15.509 10:42:42 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:15.509 10:42:42 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:15.509 10:42:42 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:15.767 [2024-07-24 10:42:42.223806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:15.767 [2024-07-24 10:42:42.224245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.767 [2024-07-24 10:42:42.224343] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:15.767 [2024-07-24 10:42:42.224612] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.767 [2024-07-24 10:42:42.225349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.767 [2024-07-24 10:42:42.225519] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:15.767 [2024-07-24 10:42:42.225752] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:15.767 [2024-07-24 10:42:42.225885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:15.767 [2024-07-24 10:42:42.226068] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:15.767 [2024-07-24 10:42:42.226180] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.767 [2024-07-24 10:42:42.226368] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:17:15.767 [2024-07-24 10:42:42.226869] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:15.767 [2024-07-24 10:42:42.227008] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:15.767 [2024-07-24 10:42:42.227221] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.767 pt3 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.767 
10:42:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.767 10:42:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.025 10:42:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:16.025 "name": "raid_bdev1", 00:17:16.025 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:16.025 "strip_size_kb": 0, 00:17:16.025 "state": "online", 00:17:16.025 "raid_level": "raid1", 00:17:16.025 "superblock": true, 00:17:16.025 "num_base_bdevs": 3, 00:17:16.025 "num_base_bdevs_discovered": 2, 00:17:16.025 "num_base_bdevs_operational": 2, 00:17:16.025 "base_bdevs_list": [ 00:17:16.025 { 00:17:16.025 "name": null, 00:17:16.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.025 "is_configured": false, 00:17:16.025 "data_offset": 2048, 00:17:16.025 "data_size": 63488 00:17:16.025 }, 00:17:16.025 { 00:17:16.025 "name": "pt2", 00:17:16.025 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:16.025 "is_configured": true, 00:17:16.025 "data_offset": 2048, 00:17:16.025 "data_size": 63488 00:17:16.025 }, 00:17:16.025 { 00:17:16.025 "name": "pt3", 00:17:16.025 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:16.025 "is_configured": true, 00:17:16.025 "data_offset": 2048, 00:17:16.025 "data_size": 63488 00:17:16.025 } 00:17:16.025 ] 00:17:16.025 }' 00:17:16.025 10:42:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:16.025 10:42:42 -- common/autotest_common.sh@10 -- # set +x 00:17:16.590 10:42:43 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:16.590 10:42:43 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:16.849 [2024-07-24 10:42:43.420037] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.849 [2024-07-24 10:42:43.420353] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.849 [2024-07-24 10:42:43.420590] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.849 [2024-07-24 10:42:43.420799] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.849 [2024-07-24 10:42:43.420923] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:16.849 10:42:43 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.849 10:42:43 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:17.106 10:42:43 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:17.106 10:42:43 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:17.106 10:42:43 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.364 [2024-07-24 10:42:43.896192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.364 [2024-07-24 
10:42:43.896717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.364 [2024-07-24 10:42:43.896891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:17.364 [2024-07-24 10:42:43.897032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.364 [2024-07-24 10:42:43.899981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.364 [2024-07-24 10:42:43.900170] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:17.364 [2024-07-24 10:42:43.900455] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:17.364 [2024-07-24 10:42:43.900633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.364 pt1 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.364 10:42:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.622 10:42:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.622 "name": "raid_bdev1", 00:17:17.622 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:17.622 "strip_size_kb": 0, 00:17:17.622 "state": "configuring", 00:17:17.622 "raid_level": "raid1", 00:17:17.622 "superblock": true, 00:17:17.622 "num_base_bdevs": 3, 00:17:17.622 "num_base_bdevs_discovered": 1, 00:17:17.622 "num_base_bdevs_operational": 3, 00:17:17.622 "base_bdevs_list": [ 00:17:17.622 { 00:17:17.622 "name": "pt1", 00:17:17.622 "uuid": "e95d05eb-ef1c-5b0d-ae2b-4c49c821f1a5", 00:17:17.622 "is_configured": true, 00:17:17.622 "data_offset": 2048, 00:17:17.622 "data_size": 63488 00:17:17.622 }, 00:17:17.622 { 00:17:17.622 "name": null, 00:17:17.622 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:17.622 "is_configured": false, 00:17:17.622 "data_offset": 2048, 00:17:17.622 "data_size": 63488 00:17:17.622 }, 00:17:17.622 { 00:17:17.622 "name": null, 00:17:17.622 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:17.622 "is_configured": false, 00:17:17.622 "data_offset": 2048, 00:17:17.622 "data_size": 63488 00:17:17.622 } 00:17:17.622 ] 00:17:17.622 }' 00:17:17.622 10:42:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.622 10:42:44 -- common/autotest_common.sh@10 -- # set +x 00:17:18.187 10:42:44 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:18.187 10:42:44 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:18.187 10:42:44 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:18.445 10:42:45 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:18.445 
10:42:45 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:18.445 10:42:45 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:18.703 10:42:45 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:18.703 10:42:45 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:18.703 10:42:45 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:18.703 10:42:45 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.962 [2024-07-24 10:42:45.529069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.962 [2024-07-24 10:42:45.530178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.962 [2024-07-24 10:42:45.530604] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:18.962 [2024-07-24 10:42:45.530908] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.962 [2024-07-24 10:42:45.532122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.962 [2024-07-24 10:42:45.532449] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.962 [2024-07-24 10:42:45.532964] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:18.962 [2024-07-24 10:42:45.533209] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:18.962 [2024-07-24 10:42:45.533466] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.962 [2024-07-24 10:42:45.533735] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:17:18.962 [2024-07-24 10:42:45.534038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.962 pt3 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.962 10:42:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.220 10:42:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.220 "name": "raid_bdev1", 00:17:19.220 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:19.220 "strip_size_kb": 0, 00:17:19.220 "state": "configuring", 00:17:19.220 "raid_level": "raid1", 00:17:19.220 "superblock": true, 00:17:19.220 "num_base_bdevs": 3, 00:17:19.220 "num_base_bdevs_discovered": 1, 00:17:19.220 "num_base_bdevs_operational": 2, 00:17:19.220 
"base_bdevs_list": [ 00:17:19.220 { 00:17:19.220 "name": null, 00:17:19.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.220 "is_configured": false, 00:17:19.220 "data_offset": 2048, 00:17:19.220 "data_size": 63488 00:17:19.220 }, 00:17:19.220 { 00:17:19.220 "name": null, 00:17:19.220 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:19.220 "is_configured": false, 00:17:19.220 "data_offset": 2048, 00:17:19.220 "data_size": 63488 00:17:19.220 }, 00:17:19.220 { 00:17:19.220 "name": "pt3", 00:17:19.220 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:19.220 "is_configured": true, 00:17:19.220 "data_offset": 2048, 00:17:19.220 "data_size": 63488 00:17:19.220 } 00:17:19.220 ] 00:17:19.220 }' 00:17:19.220 10:42:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.220 10:42:45 -- common/autotest_common.sh@10 -- # set +x 00:17:19.787 10:42:46 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:19.787 10:42:46 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:19.787 10:42:46 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.045 [2024-07-24 10:42:46.656191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.045 [2024-07-24 10:42:46.656673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.045 [2024-07-24 10:42:46.656919] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:20.045 [2024-07-24 10:42:46.657148] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.045 [2024-07-24 10:42:46.658108] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.045 [2024-07-24 10:42:46.658321] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.045 [2024-07-24 10:42:46.658654] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:20.045 [2024-07-24 10:42:46.658872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.045 [2024-07-24 10:42:46.659161] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:17:20.045 [2024-07-24 10:42:46.659333] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:20.045 [2024-07-24 10:42:46.659471] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:17:20.045 [2024-07-24 10:42:46.660039] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:17:20.045 [2024-07-24 10:42:46.660196] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:17:20.045 [2024-07-24 10:42:46.660533] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.045 pt2 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:20.045 10:42:46 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.045 10:42:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.046 10:42:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.046 10:42:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.046 10:42:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.046 10:42:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.304 10:42:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.304 "name": "raid_bdev1", 00:17:20.304 "uuid": "cbfa3f5c-6192-4e8b-b700-d50ed14eea17", 00:17:20.304 "strip_size_kb": 0, 00:17:20.304 "state": "online", 00:17:20.304 "raid_level": "raid1", 00:17:20.304 "superblock": true, 00:17:20.304 "num_base_bdevs": 3, 00:17:20.304 "num_base_bdevs_discovered": 2, 00:17:20.304 "num_base_bdevs_operational": 2, 00:17:20.304 "base_bdevs_list": [ 00:17:20.304 { 00:17:20.304 "name": null, 00:17:20.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.304 "is_configured": false, 00:17:20.304 "data_offset": 2048, 00:17:20.304 "data_size": 63488 00:17:20.304 }, 00:17:20.304 { 00:17:20.304 "name": "pt2", 00:17:20.304 "uuid": "57f8347e-ba9c-5bba-8169-f3d630fae27a", 00:17:20.304 "is_configured": true, 00:17:20.304 "data_offset": 2048, 00:17:20.304 "data_size": 63488 00:17:20.304 }, 00:17:20.304 { 00:17:20.304 "name": "pt3", 00:17:20.304 "uuid": "2ea99805-2019-55c3-926a-172a0073e3f4", 00:17:20.304 "is_configured": true, 00:17:20.304 "data_offset": 2048, 00:17:20.304 "data_size": 63488 00:17:20.304 } 00:17:20.304 ] 00:17:20.304 }' 00:17:20.304 10:42:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.304 10:42:46 -- common/autotest_common.sh@10 -- # set +x 00:17:21.267 10:42:47 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:21.267 10:42:47 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:21.267 [2024-07-24 10:42:47.881217] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.267 10:42:47 -- bdev/bdev_raid.sh@506 -- # '[' cbfa3f5c-6192-4e8b-b700-d50ed14eea17 '!=' cbfa3f5c-6192-4e8b-b700-d50ed14eea17 ']' 00:17:21.267 10:42:47 -- bdev/bdev_raid.sh@511 -- # killprocess 128664 00:17:21.267 10:42:47 -- common/autotest_common.sh@926 -- # '[' -z 128664 ']' 00:17:21.267 10:42:47 -- common/autotest_common.sh@930 -- # kill -0 128664 00:17:21.267 10:42:47 -- common/autotest_common.sh@931 -- # uname 00:17:21.267 10:42:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.267 10:42:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128664 00:17:21.267 killing process with pid 128664 00:17:21.267 10:42:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:21.267 10:42:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:21.267 10:42:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128664' 00:17:21.267 10:42:47 -- common/autotest_common.sh@945 -- # kill 128664 00:17:21.267 10:42:47 -- common/autotest_common.sh@950 -- # wait 128664 00:17:21.267 [2024-07-24 10:42:47.927756] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.267 [2024-07-24 10:42:47.927888] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.267 [2024-07-24 10:42:47.928014] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.267 [2024-07-24 10:42:47.928215] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:17:21.525 [2024-07-24 10:42:47.976091] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:21.783 ************************************ 00:17:21.783 END TEST raid_superblock_test 00:17:21.783 ************************************ 00:17:21.783 10:42:48 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:21.783 00:17:21.783 real 0m20.319s 00:17:21.783 user 0m37.935s 00:17:21.783 sys 0m2.618s 00:17:21.783 10:42:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:21.783 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:17:21.783 10:42:48 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:21.783 10:42:48 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:21.783 10:42:48 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:21.783 10:42:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:21.783 10:42:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:21.783 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:17:21.783 ************************************ 00:17:21.783 START TEST raid_state_function_test 00:17:21.783 ************************************ 00:17:21.783 10:42:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:17:21.783 10:42:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:21.783 10:42:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:21.784 
10:42:48 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=129287 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129287' 00:17:21.784 Process raid pid: 129287 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:21.784 10:42:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129287 /var/tmp/spdk-raid.sock 00:17:21.784 10:42:48 -- common/autotest_common.sh@819 -- # '[' -z 129287 ']' 00:17:21.784 10:42:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:21.784 10:42:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.784 10:42:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:21.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:21.784 10:42:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.784 10:42:48 -- common/autotest_common.sh@10 -- # set +x 00:17:21.784 [2024-07-24 10:42:48.446720] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:21.784 [2024-07-24 10:42:48.447283] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.042 [2024-07-24 10:42:48.597477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.042 [2024-07-24 10:42:48.700328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.300 [2024-07-24 10:42:48.758846] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:22.866 10:42:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.866 10:42:49 -- common/autotest_common.sh@852 -- # return 0 00:17:22.866 10:42:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:23.125 [2024-07-24 10:42:49.699028] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.125 [2024-07-24 10:42:49.699423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.125 [2024-07-24 10:42:49.699591] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.125 [2024-07-24 10:42:49.699669] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.125 [2024-07-24 10:42:49.699877] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.125 [2024-07-24 10:42:49.699986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.125 [2024-07-24 10:42:49.700126] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:23.125 [2024-07-24 10:42:49.700204] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.125 10:42:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.383 10:42:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.383 "name": "Existed_Raid", 00:17:23.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.383 "strip_size_kb": 64, 00:17:23.383 "state": "configuring", 00:17:23.383 "raid_level": "raid0", 00:17:23.383 "superblock": false, 00:17:23.383 "num_base_bdevs": 4, 00:17:23.383 "num_base_bdevs_discovered": 0, 00:17:23.383 "num_base_bdevs_operational": 4, 00:17:23.383 "base_bdevs_list": [ 00:17:23.383 { 00:17:23.383 "name": "BaseBdev1", 00:17:23.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.383 "is_configured": false, 00:17:23.383 "data_offset": 0, 00:17:23.383 "data_size": 0 00:17:23.383 }, 00:17:23.383 { 00:17:23.383 "name": "BaseBdev2", 00:17:23.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.383 "is_configured": false, 00:17:23.383 "data_offset": 0, 00:17:23.383 "data_size": 0 00:17:23.383 }, 00:17:23.383 { 00:17:23.383 "name": "BaseBdev3", 00:17:23.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.383 "is_configured": false, 00:17:23.383 "data_offset": 0, 00:17:23.383 "data_size": 0 00:17:23.383 }, 00:17:23.383 { 00:17:23.383 "name": "BaseBdev4", 00:17:23.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.383 "is_configured": false, 00:17:23.383 "data_offset": 0, 00:17:23.383 "data_size": 0 00:17:23.383 } 00:17:23.383 ] 00:17:23.383 }' 00:17:23.383 10:42:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.383 10:42:49 -- common/autotest_common.sh@10 -- # set +x 00:17:23.947 10:42:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.206 [2024-07-24 10:42:50.859084] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.206 [2024-07-24 10:42:50.859368] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:24.206 10:42:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:24.773 [2024-07-24 10:42:51.179234] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.773 [2024-07-24 10:42:51.179632] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.773 [2024-07-24 10:42:51.179774] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.773 [2024-07-24 10:42:51.179852] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:17:24.773 [2024-07-24 10:42:51.180010] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.773 [2024-07-24 10:42:51.180077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.773 [2024-07-24 10:42:51.180371] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:24.773 [2024-07-24 10:42:51.180455] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.773 10:42:51 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:25.032 [2024-07-24 10:42:51.466926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.032 BaseBdev1 00:17:25.032 10:42:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:25.032 10:42:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:25.032 10:42:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:25.032 10:42:51 -- common/autotest_common.sh@889 -- # local i 00:17:25.032 10:42:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:25.032 10:42:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:25.032 10:42:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.032 10:42:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.290 [ 00:17:25.290 { 00:17:25.290 "name": "BaseBdev1", 00:17:25.290 "aliases": [ 00:17:25.290 "155a77fd-e61b-48ba-b07b-39c7ed78ac93" 00:17:25.290 ], 00:17:25.290 "product_name": "Malloc disk", 00:17:25.290 "block_size": 512, 00:17:25.290 "num_blocks": 65536, 00:17:25.290 "uuid": "155a77fd-e61b-48ba-b07b-39c7ed78ac93", 00:17:25.291 "assigned_rate_limits": { 00:17:25.291 "rw_ios_per_sec": 0, 00:17:25.291 "rw_mbytes_per_sec": 0, 00:17:25.291 "r_mbytes_per_sec": 0, 00:17:25.291 "w_mbytes_per_sec": 0 00:17:25.291 }, 00:17:25.291 "claimed": true, 00:17:25.291 "claim_type": "exclusive_write", 00:17:25.291 "zoned": false, 00:17:25.291 "supported_io_types": { 00:17:25.291 "read": true, 00:17:25.291 "write": true, 00:17:25.291 "unmap": true, 00:17:25.291 "write_zeroes": true, 00:17:25.291 "flush": true, 00:17:25.291 "reset": true, 00:17:25.291 "compare": false, 00:17:25.291 "compare_and_write": false, 00:17:25.291 "abort": true, 00:17:25.291 "nvme_admin": false, 00:17:25.291 "nvme_io": false 00:17:25.291 }, 00:17:25.291 "memory_domains": [ 00:17:25.291 { 00:17:25.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.291 "dma_device_type": 2 00:17:25.291 } 00:17:25.291 ], 00:17:25.291 "driver_specific": {} 00:17:25.291 } 00:17:25.291 ] 00:17:25.291 10:42:51 -- common/autotest_common.sh@895 -- # return 0 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.291 10:42:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.550 10:42:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.550 "name": "Existed_Raid", 00:17:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.550 "strip_size_kb": 64, 00:17:25.550 "state": "configuring", 00:17:25.550 "raid_level": "raid0", 00:17:25.550 "superblock": false, 00:17:25.550 "num_base_bdevs": 4, 00:17:25.550 "num_base_bdevs_discovered": 1, 00:17:25.550 "num_base_bdevs_operational": 4, 00:17:25.550 "base_bdevs_list": [ 00:17:25.550 { 00:17:25.550 "name": "BaseBdev1", 00:17:25.550 "uuid": "155a77fd-e61b-48ba-b07b-39c7ed78ac93", 00:17:25.550 "is_configured": true, 00:17:25.550 "data_offset": 0, 00:17:25.550 "data_size": 65536 00:17:25.550 }, 00:17:25.550 { 00:17:25.550 "name": "BaseBdev2", 00:17:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.550 "is_configured": false, 00:17:25.550 "data_offset": 0, 00:17:25.550 "data_size": 0 00:17:25.550 }, 00:17:25.550 { 00:17:25.550 "name": "BaseBdev3", 00:17:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.550 "is_configured": false, 00:17:25.550 "data_offset": 0, 00:17:25.550 "data_size": 0 00:17:25.550 }, 00:17:25.550 { 00:17:25.550 "name": "BaseBdev4", 00:17:25.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.550 "is_configured": false, 00:17:25.550 "data_offset": 0, 00:17:25.550 "data_size": 0 00:17:25.550 } 00:17:25.550 ] 00:17:25.550 }' 00:17:25.550 10:42:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.550 10:42:52 -- common/autotest_common.sh@10 -- # set +x 00:17:26.117 10:42:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.376 [2024-07-24 10:42:53.023390] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.376 [2024-07-24 10:42:53.023815] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:26.376 10:42:53 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:26.376 10:42:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:26.635 [2024-07-24 10:42:53.295704] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.635 [2024-07-24 10:42:53.298472] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.635 [2024-07-24 10:42:53.298762] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.635 [2024-07-24 10:42:53.298934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.635 [2024-07-24 10:42:53.299019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.635 [2024-07-24 10:42:53.299175] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:26.635 [2024-07-24 10:42:53.299252] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.635 10:42:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.892 10:42:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.892 10:42:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.892 "name": "Existed_Raid", 00:17:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.892 "strip_size_kb": 64, 00:17:26.892 "state": "configuring", 00:17:26.892 "raid_level": "raid0", 00:17:26.892 "superblock": false, 00:17:26.892 "num_base_bdevs": 4, 00:17:26.892 "num_base_bdevs_discovered": 1, 00:17:26.892 "num_base_bdevs_operational": 4, 00:17:26.892 "base_bdevs_list": [ 00:17:26.892 { 00:17:26.892 "name": "BaseBdev1", 00:17:26.892 "uuid": "155a77fd-e61b-48ba-b07b-39c7ed78ac93", 00:17:26.892 "is_configured": true, 00:17:26.892 "data_offset": 0, 00:17:26.892 "data_size": 65536 00:17:26.892 }, 00:17:26.892 { 00:17:26.892 "name": "BaseBdev2", 00:17:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.892 "is_configured": false, 00:17:26.892 "data_offset": 0, 00:17:26.892 "data_size": 0 00:17:26.892 }, 00:17:26.892 { 00:17:26.892 "name": "BaseBdev3", 00:17:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.892 "is_configured": false, 00:17:26.892 "data_offset": 0, 00:17:26.892 "data_size": 0 00:17:26.892 }, 00:17:26.892 { 00:17:26.892 "name": "BaseBdev4", 00:17:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.892 "is_configured": false, 00:17:26.892 "data_offset": 0, 00:17:26.892 "data_size": 0 00:17:26.892 } 00:17:26.892 ] 00:17:26.892 }' 00:17:26.892 10:42:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.892 10:42:53 -- common/autotest_common.sh@10 -- # set +x 00:17:27.828 10:42:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:27.828 [2024-07-24 10:42:54.444944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.828 BaseBdev2 00:17:27.828 10:42:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:27.828 10:42:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:27.828 10:42:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:27.828 10:42:54 -- common/autotest_common.sh@889 -- # local i 00:17:27.828 10:42:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:27.828 10:42:54 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:17:27.828 10:42:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.087 10:42:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.345 [ 00:17:28.345 { 00:17:28.345 "name": "BaseBdev2", 00:17:28.345 "aliases": [ 00:17:28.345 "a03986d8-425f-41fd-aa41-d223c30673a0" 00:17:28.345 ], 00:17:28.345 "product_name": "Malloc disk", 00:17:28.345 "block_size": 512, 00:17:28.345 "num_blocks": 65536, 00:17:28.345 "uuid": "a03986d8-425f-41fd-aa41-d223c30673a0", 00:17:28.345 "assigned_rate_limits": { 00:17:28.345 "rw_ios_per_sec": 0, 00:17:28.345 "rw_mbytes_per_sec": 0, 00:17:28.345 "r_mbytes_per_sec": 0, 00:17:28.345 "w_mbytes_per_sec": 0 00:17:28.345 }, 00:17:28.345 "claimed": true, 00:17:28.345 "claim_type": "exclusive_write", 00:17:28.345 "zoned": false, 00:17:28.345 "supported_io_types": { 00:17:28.345 "read": true, 00:17:28.345 "write": true, 00:17:28.345 "unmap": true, 00:17:28.345 "write_zeroes": true, 00:17:28.345 "flush": true, 00:17:28.345 "reset": true, 00:17:28.346 "compare": false, 00:17:28.346 "compare_and_write": false, 00:17:28.346 "abort": true, 00:17:28.346 "nvme_admin": false, 00:17:28.346 "nvme_io": false 00:17:28.346 }, 00:17:28.346 "memory_domains": [ 00:17:28.346 { 00:17:28.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.346 "dma_device_type": 2 00:17:28.346 } 00:17:28.346 ], 00:17:28.346 "driver_specific": {} 00:17:28.346 } 00:17:28.346 ] 00:17:28.346 10:42:54 -- common/autotest_common.sh@895 -- # return 0 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.346 10:42:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.604 10:42:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.604 "name": "Existed_Raid", 00:17:28.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.604 "strip_size_kb": 64, 00:17:28.604 "state": "configuring", 00:17:28.604 "raid_level": "raid0", 00:17:28.604 "superblock": false, 00:17:28.604 "num_base_bdevs": 4, 00:17:28.604 "num_base_bdevs_discovered": 2, 00:17:28.604 "num_base_bdevs_operational": 4, 00:17:28.604 "base_bdevs_list": [ 00:17:28.604 { 00:17:28.604 "name": "BaseBdev1", 00:17:28.604 "uuid": "155a77fd-e61b-48ba-b07b-39c7ed78ac93", 00:17:28.604 "is_configured": true, 00:17:28.604 "data_offset": 0, 00:17:28.604 "data_size": 65536 00:17:28.604 }, 
00:17:28.604 { 00:17:28.604 "name": "BaseBdev2", 00:17:28.604 "uuid": "a03986d8-425f-41fd-aa41-d223c30673a0", 00:17:28.604 "is_configured": true, 00:17:28.604 "data_offset": 0, 00:17:28.604 "data_size": 65536 00:17:28.604 }, 00:17:28.604 { 00:17:28.604 "name": "BaseBdev3", 00:17:28.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.604 "is_configured": false, 00:17:28.604 "data_offset": 0, 00:17:28.604 "data_size": 0 00:17:28.604 }, 00:17:28.605 { 00:17:28.605 "name": "BaseBdev4", 00:17:28.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.605 "is_configured": false, 00:17:28.605 "data_offset": 0, 00:17:28.605 "data_size": 0 00:17:28.605 } 00:17:28.605 ] 00:17:28.605 }' 00:17:28.605 10:42:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.605 10:42:55 -- common/autotest_common.sh@10 -- # set +x 00:17:29.541 10:42:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.541 [2024-07-24 10:42:56.198187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.541 BaseBdev3 00:17:29.541 10:42:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:29.541 10:42:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:29.541 10:42:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:29.541 10:42:56 -- common/autotest_common.sh@889 -- # local i 00:17:29.541 10:42:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:29.541 10:42:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:29.541 10:42:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.113 10:42:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:30.113 [ 00:17:30.113 { 00:17:30.113 "name": "BaseBdev3", 00:17:30.113 "aliases": [ 00:17:30.113 "c3f8435e-0781-4596-a206-d84007acba2a" 00:17:30.113 ], 00:17:30.113 "product_name": "Malloc disk", 00:17:30.113 "block_size": 512, 00:17:30.113 "num_blocks": 65536, 00:17:30.113 "uuid": "c3f8435e-0781-4596-a206-d84007acba2a", 00:17:30.113 "assigned_rate_limits": { 00:17:30.113 "rw_ios_per_sec": 0, 00:17:30.113 "rw_mbytes_per_sec": 0, 00:17:30.113 "r_mbytes_per_sec": 0, 00:17:30.113 "w_mbytes_per_sec": 0 00:17:30.113 }, 00:17:30.113 "claimed": true, 00:17:30.113 "claim_type": "exclusive_write", 00:17:30.113 "zoned": false, 00:17:30.113 "supported_io_types": { 00:17:30.113 "read": true, 00:17:30.113 "write": true, 00:17:30.113 "unmap": true, 00:17:30.113 "write_zeroes": true, 00:17:30.113 "flush": true, 00:17:30.113 "reset": true, 00:17:30.113 "compare": false, 00:17:30.113 "compare_and_write": false, 00:17:30.113 "abort": true, 00:17:30.113 "nvme_admin": false, 00:17:30.113 "nvme_io": false 00:17:30.113 }, 00:17:30.113 "memory_domains": [ 00:17:30.113 { 00:17:30.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.113 "dma_device_type": 2 00:17:30.113 } 00:17:30.113 ], 00:17:30.113 "driver_specific": {} 00:17:30.113 } 00:17:30.113 ] 00:17:30.113 10:42:56 -- common/autotest_common.sh@895 -- # return 0 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.113 10:42:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.387 10:42:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.387 "name": "Existed_Raid", 00:17:30.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.387 "strip_size_kb": 64, 00:17:30.387 "state": "configuring", 00:17:30.387 "raid_level": "raid0", 00:17:30.387 "superblock": false, 00:17:30.387 "num_base_bdevs": 4, 00:17:30.387 "num_base_bdevs_discovered": 3, 00:17:30.387 "num_base_bdevs_operational": 4, 00:17:30.387 "base_bdevs_list": [ 00:17:30.387 { 00:17:30.387 "name": "BaseBdev1", 00:17:30.387 "uuid": "155a77fd-e61b-48ba-b07b-39c7ed78ac93", 00:17:30.387 "is_configured": true, 00:17:30.387 "data_offset": 0, 00:17:30.387 "data_size": 65536 00:17:30.387 }, 00:17:30.387 { 00:17:30.387 "name": "BaseBdev2", 00:17:30.387 "uuid": "a03986d8-425f-41fd-aa41-d223c30673a0", 00:17:30.387 "is_configured": true, 00:17:30.387 "data_offset": 0, 00:17:30.387 "data_size": 65536 00:17:30.387 }, 00:17:30.387 { 00:17:30.387 "name": "BaseBdev3", 00:17:30.387 "uuid": "c3f8435e-0781-4596-a206-d84007acba2a", 00:17:30.387 "is_configured": true, 00:17:30.387 "data_offset": 0, 00:17:30.387 "data_size": 65536 00:17:30.387 }, 00:17:30.387 { 00:17:30.387 "name": "BaseBdev4", 00:17:30.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.387 "is_configured": false, 00:17:30.387 "data_offset": 0, 00:17:30.387 "data_size": 0 00:17:30.387 } 00:17:30.387 ] 00:17:30.387 }' 00:17:30.387 10:42:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.387 10:42:56 -- common/autotest_common.sh@10 -- # set +x 00:17:30.954 10:42:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:31.212 [2024-07-24 10:42:57.839000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:31.212 [2024-07-24 10:42:57.839446] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:17:31.212 [2024-07-24 10:42:57.839505] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:31.212 [2024-07-24 10:42:57.839906] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:17:31.212 [2024-07-24 10:42:57.840471] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:17:31.212 [2024-07-24 10:42:57.840673] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:17:31.212 [2024-07-24 10:42:57.841118] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.213 BaseBdev4 00:17:31.213 10:42:57 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:17:31.213 10:42:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:31.213 10:42:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:31.213 10:42:57 -- common/autotest_common.sh@889 -- # local i 00:17:31.213 10:42:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:31.213 10:42:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:31.213 10:42:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.471 10:42:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:31.730 [ 00:17:31.730 { 00:17:31.730 "name": "BaseBdev4", 00:17:31.730 "aliases": [ 00:17:31.730 "c57f50d1-01dc-4d53-b34c-bbe9e884d3f1" 00:17:31.730 ], 00:17:31.730 "product_name": "Malloc disk", 00:17:31.730 "block_size": 512, 00:17:31.730 "num_blocks": 65536, 00:17:31.730 "uuid": "c57f50d1-01dc-4d53-b34c-bbe9e884d3f1", 00:17:31.730 "assigned_rate_limits": { 00:17:31.730 "rw_ios_per_sec": 0, 00:17:31.730 "rw_mbytes_per_sec": 0, 00:17:31.730 "r_mbytes_per_sec": 0, 00:17:31.730 "w_mbytes_per_sec": 0 00:17:31.730 }, 00:17:31.730 "claimed": true, 00:17:31.730 "claim_type": "exclusive_write", 00:17:31.730 "zoned": false, 00:17:31.730 "supported_io_types": { 00:17:31.730 "read": true, 00:17:31.730 "write": true, 00:17:31.730 "unmap": true, 00:17:31.730 "write_zeroes": true, 00:17:31.730 "flush": true, 00:17:31.730 "reset": true, 00:17:31.730 "compare": false, 00:17:31.730 "compare_and_write": false, 00:17:31.730 "abort": true, 00:17:31.730 "nvme_admin": false, 00:17:31.730 "nvme_io": false 00:17:31.730 }, 00:17:31.730 "memory_domains": [ 00:17:31.730 { 00:17:31.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.730 "dma_device_type": 2 00:17:31.730 } 00:17:31.730 ], 00:17:31.730 "driver_specific": {} 00:17:31.730 } 00:17:31.730 ] 00:17:31.730 10:42:58 -- common/autotest_common.sh@895 -- # return 0 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.730 10:42:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.989 10:42:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:31.989 "name": "Existed_Raid", 00:17:31.989 "uuid": "74de6e8e-9971-4b25-8b78-8d96e845b71b", 00:17:31.989 "strip_size_kb": 64, 00:17:31.989 "state": "online", 00:17:31.989 "raid_level": "raid0", 00:17:31.989 "superblock": false, 00:17:31.989 
"num_base_bdevs": 4, 00:17:31.989 "num_base_bdevs_discovered": 4, 00:17:31.989 "num_base_bdevs_operational": 4, 00:17:31.989 "base_bdevs_list": [ 00:17:31.989 { 00:17:31.989 "name": "BaseBdev1", 00:17:31.989 "uuid": "155a77fd-e61b-48ba-b07b-39c7ed78ac93", 00:17:31.989 "is_configured": true, 00:17:31.989 "data_offset": 0, 00:17:31.989 "data_size": 65536 00:17:31.989 }, 00:17:31.989 { 00:17:31.989 "name": "BaseBdev2", 00:17:31.989 "uuid": "a03986d8-425f-41fd-aa41-d223c30673a0", 00:17:31.989 "is_configured": true, 00:17:31.989 "data_offset": 0, 00:17:31.989 "data_size": 65536 00:17:31.989 }, 00:17:31.989 { 00:17:31.989 "name": "BaseBdev3", 00:17:31.989 "uuid": "c3f8435e-0781-4596-a206-d84007acba2a", 00:17:31.989 "is_configured": true, 00:17:31.989 "data_offset": 0, 00:17:31.989 "data_size": 65536 00:17:31.989 }, 00:17:31.989 { 00:17:31.989 "name": "BaseBdev4", 00:17:31.989 "uuid": "c57f50d1-01dc-4d53-b34c-bbe9e884d3f1", 00:17:31.989 "is_configured": true, 00:17:31.989 "data_offset": 0, 00:17:31.989 "data_size": 65536 00:17:31.989 } 00:17:31.989 ] 00:17:31.989 }' 00:17:31.989 10:42:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:31.989 10:42:58 -- common/autotest_common.sh@10 -- # set +x 00:17:32.554 10:42:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.812 [2024-07-24 10:42:59.455747] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.812 [2024-07-24 10:42:59.456111] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.812 [2024-07-24 10:42:59.456385] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.812 10:42:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.813 10:42:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.813 10:42:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.071 10:42:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.071 "name": "Existed_Raid", 00:17:33.071 "uuid": "74de6e8e-9971-4b25-8b78-8d96e845b71b", 00:17:33.071 "strip_size_kb": 64, 00:17:33.071 "state": "offline", 00:17:33.071 "raid_level": "raid0", 00:17:33.071 "superblock": false, 00:17:33.071 "num_base_bdevs": 4, 00:17:33.071 "num_base_bdevs_discovered": 3, 00:17:33.071 "num_base_bdevs_operational": 3, 00:17:33.071 
"base_bdevs_list": [ 00:17:33.071 { 00:17:33.071 "name": null, 00:17:33.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.071 "is_configured": false, 00:17:33.071 "data_offset": 0, 00:17:33.071 "data_size": 65536 00:17:33.071 }, 00:17:33.071 { 00:17:33.071 "name": "BaseBdev2", 00:17:33.071 "uuid": "a03986d8-425f-41fd-aa41-d223c30673a0", 00:17:33.071 "is_configured": true, 00:17:33.071 "data_offset": 0, 00:17:33.071 "data_size": 65536 00:17:33.071 }, 00:17:33.071 { 00:17:33.071 "name": "BaseBdev3", 00:17:33.071 "uuid": "c3f8435e-0781-4596-a206-d84007acba2a", 00:17:33.071 "is_configured": true, 00:17:33.071 "data_offset": 0, 00:17:33.071 "data_size": 65536 00:17:33.071 }, 00:17:33.071 { 00:17:33.071 "name": "BaseBdev4", 00:17:33.071 "uuid": "c57f50d1-01dc-4d53-b34c-bbe9e884d3f1", 00:17:33.071 "is_configured": true, 00:17:33.071 "data_offset": 0, 00:17:33.071 "data_size": 65536 00:17:33.071 } 00:17:33.071 ] 00:17:33.071 }' 00:17:33.071 10:42:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.071 10:42:59 -- common/autotest_common.sh@10 -- # set +x 00:17:34.006 10:43:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:34.006 10:43:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.006 10:43:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.006 10:43:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:34.006 10:43:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:34.006 10:43:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.006 10:43:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:34.265 [2024-07-24 10:43:00.840600] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.265 10:43:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.265 10:43:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.265 10:43:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:34.265 10:43:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.523 10:43:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:34.523 10:43:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.523 10:43:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:34.781 [2024-07-24 10:43:01.384265] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.781 10:43:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.781 10:43:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.781 10:43:01 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.781 10:43:01 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:35.040 10:43:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:35.040 10:43:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:35.040 10:43:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:35.298 [2024-07-24 10:43:01.900878] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:35.298 [2024-07-24 10:43:01.901260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 
name Existed_Raid, state offline 00:17:35.298 10:43:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:35.298 10:43:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:35.298 10:43:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.298 10:43:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:35.556 10:43:02 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:35.556 10:43:02 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:35.557 10:43:02 -- bdev/bdev_raid.sh@287 -- # killprocess 129287 00:17:35.557 10:43:02 -- common/autotest_common.sh@926 -- # '[' -z 129287 ']' 00:17:35.557 10:43:02 -- common/autotest_common.sh@930 -- # kill -0 129287 00:17:35.557 10:43:02 -- common/autotest_common.sh@931 -- # uname 00:17:35.815 10:43:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:35.815 10:43:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129287 00:17:35.815 killing process with pid 129287 00:17:35.815 10:43:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:35.815 10:43:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:35.815 10:43:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129287' 00:17:35.815 10:43:02 -- common/autotest_common.sh@945 -- # kill 129287 00:17:35.815 10:43:02 -- common/autotest_common.sh@950 -- # wait 129287 00:17:35.815 [2024-07-24 10:43:02.264070] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:35.815 [2024-07-24 10:43:02.264185] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:36.073 ************************************ 00:17:36.073 END TEST raid_state_function_test 00:17:36.073 ************************************ 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:36.073 00:17:36.073 real 0m14.218s 00:17:36.073 user 0m26.284s 00:17:36.073 sys 0m1.708s 00:17:36.073 10:43:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:36.073 10:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:36.073 10:43:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:36.073 10:43:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:36.073 10:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:36.073 ************************************ 00:17:36.073 START TEST raid_state_function_test_sb 00:17:36.073 ************************************ 00:17:36.073 10:43:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=129726 00:17:36.073 10:43:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:36.074 10:43:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 129726' 00:17:36.074 Process raid pid: 129726 00:17:36.074 10:43:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 129726 /var/tmp/spdk-raid.sock 00:17:36.074 10:43:02 -- common/autotest_common.sh@819 -- # '[' -z 129726 ']' 00:17:36.074 10:43:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:36.074 10:43:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:36.074 10:43:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:36.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:36.074 10:43:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:36.074 10:43:02 -- common/autotest_common.sh@10 -- # set +x 00:17:36.074 [2024-07-24 10:43:02.723488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:36.074 [2024-07-24 10:43:02.725206] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:36.332 [2024-07-24 10:43:02.875806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.332 [2024-07-24 10:43:02.996810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.590 [2024-07-24 10:43:03.080840] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.155 10:43:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:37.155 10:43:03 -- common/autotest_common.sh@852 -- # return 0 00:17:37.155 10:43:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:37.412 [2024-07-24 10:43:03.962353] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:37.412 [2024-07-24 10:43:03.962774] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:37.412 [2024-07-24 10:43:03.962902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:37.412 [2024-07-24 10:43:03.962968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:37.412 [2024-07-24 10:43:03.963181] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:37.412 [2024-07-24 10:43:03.963283] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:37.412 [2024-07-24 10:43:03.963420] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:37.412 [2024-07-24 10:43:03.963490] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.412 10:43:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.670 10:43:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.670 "name": "Existed_Raid", 00:17:37.670 "uuid": "e5e94afd-d009-4986-9866-6ac2e29dfe75", 00:17:37.670 "strip_size_kb": 64, 00:17:37.670 "state": "configuring", 00:17:37.670 "raid_level": "raid0", 00:17:37.670 "superblock": true, 00:17:37.670 "num_base_bdevs": 4, 00:17:37.670 "num_base_bdevs_discovered": 0, 00:17:37.670 "num_base_bdevs_operational": 4, 00:17:37.670 "base_bdevs_list": [ 00:17:37.670 { 00:17:37.670 
"name": "BaseBdev1", 00:17:37.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.670 "is_configured": false, 00:17:37.670 "data_offset": 0, 00:17:37.670 "data_size": 0 00:17:37.670 }, 00:17:37.670 { 00:17:37.670 "name": "BaseBdev2", 00:17:37.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.671 "is_configured": false, 00:17:37.671 "data_offset": 0, 00:17:37.671 "data_size": 0 00:17:37.671 }, 00:17:37.671 { 00:17:37.671 "name": "BaseBdev3", 00:17:37.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.671 "is_configured": false, 00:17:37.671 "data_offset": 0, 00:17:37.671 "data_size": 0 00:17:37.671 }, 00:17:37.671 { 00:17:37.671 "name": "BaseBdev4", 00:17:37.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.671 "is_configured": false, 00:17:37.671 "data_offset": 0, 00:17:37.671 "data_size": 0 00:17:37.671 } 00:17:37.671 ] 00:17:37.671 }' 00:17:37.671 10:43:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.671 10:43:04 -- common/autotest_common.sh@10 -- # set +x 00:17:38.236 10:43:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:38.494 [2024-07-24 10:43:05.174439] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:38.494 [2024-07-24 10:43:05.174739] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:17:38.753 10:43:05 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:39.047 [2024-07-24 10:43:05.446613] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.047 [2024-07-24 10:43:05.446890] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.047 [2024-07-24 10:43:05.447007] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.047 [2024-07-24 10:43:05.447080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.047 [2024-07-24 10:43:05.447304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:39.047 [2024-07-24 10:43:05.447374] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:39.047 [2024-07-24 10:43:05.447551] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:39.047 [2024-07-24 10:43:05.447626] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:39.047 10:43:05 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:39.047 [2024-07-24 10:43:05.686551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.047 BaseBdev1 00:17:39.047 10:43:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:39.047 10:43:05 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:39.047 10:43:05 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:39.047 10:43:05 -- common/autotest_common.sh@889 -- # local i 00:17:39.047 10:43:05 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:39.047 10:43:05 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:39.047 10:43:05 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:39.305 10:43:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:39.565 [ 00:17:39.565 { 00:17:39.565 "name": "BaseBdev1", 00:17:39.565 "aliases": [ 00:17:39.565 "bf03da30-da75-4f8c-bda6-101d63074d47" 00:17:39.565 ], 00:17:39.565 "product_name": "Malloc disk", 00:17:39.565 "block_size": 512, 00:17:39.565 "num_blocks": 65536, 00:17:39.565 "uuid": "bf03da30-da75-4f8c-bda6-101d63074d47", 00:17:39.565 "assigned_rate_limits": { 00:17:39.565 "rw_ios_per_sec": 0, 00:17:39.565 "rw_mbytes_per_sec": 0, 00:17:39.565 "r_mbytes_per_sec": 0, 00:17:39.565 "w_mbytes_per_sec": 0 00:17:39.565 }, 00:17:39.565 "claimed": true, 00:17:39.565 "claim_type": "exclusive_write", 00:17:39.565 "zoned": false, 00:17:39.565 "supported_io_types": { 00:17:39.565 "read": true, 00:17:39.565 "write": true, 00:17:39.565 "unmap": true, 00:17:39.565 "write_zeroes": true, 00:17:39.565 "flush": true, 00:17:39.565 "reset": true, 00:17:39.565 "compare": false, 00:17:39.565 "compare_and_write": false, 00:17:39.565 "abort": true, 00:17:39.565 "nvme_admin": false, 00:17:39.565 "nvme_io": false 00:17:39.565 }, 00:17:39.565 "memory_domains": [ 00:17:39.565 { 00:17:39.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.565 "dma_device_type": 2 00:17:39.565 } 00:17:39.565 ], 00:17:39.565 "driver_specific": {} 00:17:39.565 } 00:17:39.565 ] 00:17:39.565 10:43:06 -- common/autotest_common.sh@895 -- # return 0 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.565 10:43:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.823 10:43:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:39.823 "name": "Existed_Raid", 00:17:39.823 "uuid": "1134167a-5214-4c3c-918f-e3fbfce58f1c", 00:17:39.823 "strip_size_kb": 64, 00:17:39.823 "state": "configuring", 00:17:39.823 "raid_level": "raid0", 00:17:39.823 "superblock": true, 00:17:39.823 "num_base_bdevs": 4, 00:17:39.823 "num_base_bdevs_discovered": 1, 00:17:39.823 "num_base_bdevs_operational": 4, 00:17:39.823 "base_bdevs_list": [ 00:17:39.823 { 00:17:39.823 "name": "BaseBdev1", 00:17:39.823 "uuid": "bf03da30-da75-4f8c-bda6-101d63074d47", 00:17:39.823 "is_configured": true, 00:17:39.823 "data_offset": 2048, 00:17:39.823 "data_size": 63488 00:17:39.823 }, 00:17:39.823 { 00:17:39.823 "name": "BaseBdev2", 00:17:39.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.823 "is_configured": false, 00:17:39.823 "data_offset": 0, 00:17:39.823 "data_size": 0 00:17:39.823 }, 
00:17:39.823 { 00:17:39.823 "name": "BaseBdev3", 00:17:39.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.823 "is_configured": false, 00:17:39.823 "data_offset": 0, 00:17:39.823 "data_size": 0 00:17:39.823 }, 00:17:39.823 { 00:17:39.823 "name": "BaseBdev4", 00:17:39.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.823 "is_configured": false, 00:17:39.823 "data_offset": 0, 00:17:39.823 "data_size": 0 00:17:39.823 } 00:17:39.823 ] 00:17:39.823 }' 00:17:39.823 10:43:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:39.823 10:43:06 -- common/autotest_common.sh@10 -- # set +x 00:17:40.757 10:43:07 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:40.757 [2024-07-24 10:43:07.335053] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:40.757 [2024-07-24 10:43:07.335396] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:17:40.757 10:43:07 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:40.757 10:43:07 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:41.015 10:43:07 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:41.273 BaseBdev1 00:17:41.273 10:43:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:41.273 10:43:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:41.273 10:43:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:41.273 10:43:07 -- common/autotest_common.sh@889 -- # local i 00:17:41.273 10:43:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:41.273 10:43:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:41.273 10:43:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.531 10:43:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:41.789 [ 00:17:41.789 { 00:17:41.789 "name": "BaseBdev1", 00:17:41.789 "aliases": [ 00:17:41.789 "daaf4454-f99c-498d-b6a5-2114d509a40a" 00:17:41.789 ], 00:17:41.789 "product_name": "Malloc disk", 00:17:41.789 "block_size": 512, 00:17:41.789 "num_blocks": 65536, 00:17:41.789 "uuid": "daaf4454-f99c-498d-b6a5-2114d509a40a", 00:17:41.789 "assigned_rate_limits": { 00:17:41.789 "rw_ios_per_sec": 0, 00:17:41.789 "rw_mbytes_per_sec": 0, 00:17:41.789 "r_mbytes_per_sec": 0, 00:17:41.789 "w_mbytes_per_sec": 0 00:17:41.789 }, 00:17:41.789 "claimed": false, 00:17:41.789 "zoned": false, 00:17:41.789 "supported_io_types": { 00:17:41.789 "read": true, 00:17:41.789 "write": true, 00:17:41.789 "unmap": true, 00:17:41.789 "write_zeroes": true, 00:17:41.789 "flush": true, 00:17:41.789 "reset": true, 00:17:41.789 "compare": false, 00:17:41.789 "compare_and_write": false, 00:17:41.789 "abort": true, 00:17:41.789 "nvme_admin": false, 00:17:41.789 "nvme_io": false 00:17:41.789 }, 00:17:41.789 "memory_domains": [ 00:17:41.789 { 00:17:41.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.789 "dma_device_type": 2 00:17:41.789 } 00:17:41.789 ], 00:17:41.789 "driver_specific": {} 00:17:41.789 } 00:17:41.789 ] 00:17:41.789 10:43:08 -- common/autotest_common.sh@895 -- # return 0 00:17:41.789 10:43:08 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.047 [2024-07-24 10:43:08.515727] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.047 [2024-07-24 10:43:08.518409] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.047 [2024-07-24 10:43:08.518664] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.047 [2024-07-24 10:43:08.518789] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.047 [2024-07-24 10:43:08.518862] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.047 [2024-07-24 10:43:08.518970] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.047 [2024-07-24 10:43:08.519049] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.047 10:43:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.305 10:43:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.305 "name": "Existed_Raid", 00:17:42.305 "uuid": "faa8e3d5-4b39-44bb-b3c2-ed899e5e4217", 00:17:42.305 "strip_size_kb": 64, 00:17:42.305 "state": "configuring", 00:17:42.305 "raid_level": "raid0", 00:17:42.305 "superblock": true, 00:17:42.305 "num_base_bdevs": 4, 00:17:42.305 "num_base_bdevs_discovered": 1, 00:17:42.305 "num_base_bdevs_operational": 4, 00:17:42.305 "base_bdevs_list": [ 00:17:42.305 { 00:17:42.305 "name": "BaseBdev1", 00:17:42.305 "uuid": "daaf4454-f99c-498d-b6a5-2114d509a40a", 00:17:42.305 "is_configured": true, 00:17:42.305 "data_offset": 2048, 00:17:42.305 "data_size": 63488 00:17:42.305 }, 00:17:42.305 { 00:17:42.305 "name": "BaseBdev2", 00:17:42.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.305 "is_configured": false, 00:17:42.305 "data_offset": 0, 00:17:42.305 "data_size": 0 00:17:42.305 }, 00:17:42.305 { 00:17:42.305 "name": "BaseBdev3", 00:17:42.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.305 "is_configured": false, 00:17:42.305 "data_offset": 0, 00:17:42.305 "data_size": 0 00:17:42.305 }, 00:17:42.305 { 00:17:42.305 "name": "BaseBdev4", 00:17:42.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.305 "is_configured": 
false, 00:17:42.305 "data_offset": 0, 00:17:42.305 "data_size": 0 00:17:42.305 } 00:17:42.305 ] 00:17:42.305 }' 00:17:42.305 10:43:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.305 10:43:08 -- common/autotest_common.sh@10 -- # set +x 00:17:42.872 10:43:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.176 [2024-07-24 10:43:09.740869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.176 BaseBdev2 00:17:43.176 10:43:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:43.176 10:43:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:43.176 10:43:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:43.176 10:43:09 -- common/autotest_common.sh@889 -- # local i 00:17:43.176 10:43:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:43.176 10:43:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:43.176 10:43:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.434 10:43:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:43.693 [ 00:17:43.693 { 00:17:43.693 "name": "BaseBdev2", 00:17:43.693 "aliases": [ 00:17:43.693 "18d2a66e-9ef5-480a-9956-9142740b8139" 00:17:43.693 ], 00:17:43.693 "product_name": "Malloc disk", 00:17:43.693 "block_size": 512, 00:17:43.693 "num_blocks": 65536, 00:17:43.693 "uuid": "18d2a66e-9ef5-480a-9956-9142740b8139", 00:17:43.693 "assigned_rate_limits": { 00:17:43.693 "rw_ios_per_sec": 0, 00:17:43.693 "rw_mbytes_per_sec": 0, 00:17:43.693 "r_mbytes_per_sec": 0, 00:17:43.693 "w_mbytes_per_sec": 0 00:17:43.693 }, 00:17:43.693 "claimed": true, 00:17:43.693 "claim_type": "exclusive_write", 00:17:43.693 "zoned": false, 00:17:43.693 "supported_io_types": { 00:17:43.693 "read": true, 00:17:43.693 "write": true, 00:17:43.693 "unmap": true, 00:17:43.693 "write_zeroes": true, 00:17:43.693 "flush": true, 00:17:43.693 "reset": true, 00:17:43.693 "compare": false, 00:17:43.693 "compare_and_write": false, 00:17:43.693 "abort": true, 00:17:43.693 "nvme_admin": false, 00:17:43.693 "nvme_io": false 00:17:43.693 }, 00:17:43.693 "memory_domains": [ 00:17:43.693 { 00:17:43.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.693 "dma_device_type": 2 00:17:43.693 } 00:17:43.693 ], 00:17:43.693 "driver_specific": {} 00:17:43.693 } 00:17:43.693 ] 00:17:43.693 10:43:10 -- common/autotest_common.sh@895 -- # return 0 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.693 
10:43:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.693 10:43:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.952 10:43:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.952 "name": "Existed_Raid", 00:17:43.952 "uuid": "faa8e3d5-4b39-44bb-b3c2-ed899e5e4217", 00:17:43.952 "strip_size_kb": 64, 00:17:43.952 "state": "configuring", 00:17:43.952 "raid_level": "raid0", 00:17:43.952 "superblock": true, 00:17:43.952 "num_base_bdevs": 4, 00:17:43.952 "num_base_bdevs_discovered": 2, 00:17:43.952 "num_base_bdevs_operational": 4, 00:17:43.952 "base_bdevs_list": [ 00:17:43.952 { 00:17:43.952 "name": "BaseBdev1", 00:17:43.952 "uuid": "daaf4454-f99c-498d-b6a5-2114d509a40a", 00:17:43.952 "is_configured": true, 00:17:43.952 "data_offset": 2048, 00:17:43.952 "data_size": 63488 00:17:43.952 }, 00:17:43.952 { 00:17:43.952 "name": "BaseBdev2", 00:17:43.952 "uuid": "18d2a66e-9ef5-480a-9956-9142740b8139", 00:17:43.952 "is_configured": true, 00:17:43.952 "data_offset": 2048, 00:17:43.952 "data_size": 63488 00:17:43.952 }, 00:17:43.952 { 00:17:43.952 "name": "BaseBdev3", 00:17:43.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.952 "is_configured": false, 00:17:43.952 "data_offset": 0, 00:17:43.952 "data_size": 0 00:17:43.952 }, 00:17:43.952 { 00:17:43.952 "name": "BaseBdev4", 00:17:43.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.952 "is_configured": false, 00:17:43.952 "data_offset": 0, 00:17:43.952 "data_size": 0 00:17:43.952 } 00:17:43.952 ] 00:17:43.952 }' 00:17:43.952 10:43:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.952 10:43:10 -- common/autotest_common.sh@10 -- # set +x 00:17:44.518 10:43:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:44.775 [2024-07-24 10:43:11.369556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.775 BaseBdev3 00:17:44.775 10:43:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:44.775 10:43:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:44.775 10:43:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:44.775 10:43:11 -- common/autotest_common.sh@889 -- # local i 00:17:44.775 10:43:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:44.775 10:43:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:44.775 10:43:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.033 10:43:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.292 [ 00:17:45.292 { 00:17:45.292 "name": "BaseBdev3", 00:17:45.292 "aliases": [ 00:17:45.292 "76b3f87b-73fd-4556-9516-90c50aaee400" 00:17:45.292 ], 00:17:45.292 "product_name": "Malloc disk", 00:17:45.292 "block_size": 512, 00:17:45.292 "num_blocks": 65536, 00:17:45.292 "uuid": "76b3f87b-73fd-4556-9516-90c50aaee400", 00:17:45.292 "assigned_rate_limits": { 00:17:45.292 "rw_ios_per_sec": 0, 00:17:45.292 "rw_mbytes_per_sec": 0, 00:17:45.292 "r_mbytes_per_sec": 0, 00:17:45.292 "w_mbytes_per_sec": 0 00:17:45.292 }, 00:17:45.292 "claimed": true, 00:17:45.292 "claim_type": "exclusive_write", 00:17:45.292 "zoned": false, 
00:17:45.292 "supported_io_types": { 00:17:45.292 "read": true, 00:17:45.292 "write": true, 00:17:45.292 "unmap": true, 00:17:45.292 "write_zeroes": true, 00:17:45.292 "flush": true, 00:17:45.292 "reset": true, 00:17:45.292 "compare": false, 00:17:45.292 "compare_and_write": false, 00:17:45.292 "abort": true, 00:17:45.292 "nvme_admin": false, 00:17:45.292 "nvme_io": false 00:17:45.292 }, 00:17:45.292 "memory_domains": [ 00:17:45.292 { 00:17:45.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.292 "dma_device_type": 2 00:17:45.292 } 00:17:45.292 ], 00:17:45.292 "driver_specific": {} 00:17:45.292 } 00:17:45.292 ] 00:17:45.292 10:43:11 -- common/autotest_common.sh@895 -- # return 0 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.292 10:43:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.551 10:43:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.551 "name": "Existed_Raid", 00:17:45.551 "uuid": "faa8e3d5-4b39-44bb-b3c2-ed899e5e4217", 00:17:45.551 "strip_size_kb": 64, 00:17:45.551 "state": "configuring", 00:17:45.551 "raid_level": "raid0", 00:17:45.551 "superblock": true, 00:17:45.551 "num_base_bdevs": 4, 00:17:45.551 "num_base_bdevs_discovered": 3, 00:17:45.551 "num_base_bdevs_operational": 4, 00:17:45.551 "base_bdevs_list": [ 00:17:45.551 { 00:17:45.551 "name": "BaseBdev1", 00:17:45.551 "uuid": "daaf4454-f99c-498d-b6a5-2114d509a40a", 00:17:45.551 "is_configured": true, 00:17:45.551 "data_offset": 2048, 00:17:45.551 "data_size": 63488 00:17:45.551 }, 00:17:45.551 { 00:17:45.551 "name": "BaseBdev2", 00:17:45.551 "uuid": "18d2a66e-9ef5-480a-9956-9142740b8139", 00:17:45.551 "is_configured": true, 00:17:45.551 "data_offset": 2048, 00:17:45.551 "data_size": 63488 00:17:45.551 }, 00:17:45.551 { 00:17:45.551 "name": "BaseBdev3", 00:17:45.551 "uuid": "76b3f87b-73fd-4556-9516-90c50aaee400", 00:17:45.551 "is_configured": true, 00:17:45.551 "data_offset": 2048, 00:17:45.551 "data_size": 63488 00:17:45.551 }, 00:17:45.551 { 00:17:45.551 "name": "BaseBdev4", 00:17:45.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.551 "is_configured": false, 00:17:45.551 "data_offset": 0, 00:17:45.551 "data_size": 0 00:17:45.551 } 00:17:45.551 ] 00:17:45.551 }' 00:17:45.551 10:43:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.551 10:43:12 -- common/autotest_common.sh@10 -- # set +x 00:17:46.118 10:43:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:46.375 [2024-07-24 10:43:13.020569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.376 [2024-07-24 10:43:13.021230] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:17:46.376 [2024-07-24 10:43:13.021366] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:46.376 [2024-07-24 10:43:13.021568] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:17:46.376 [2024-07-24 10:43:13.022066] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:17:46.376 [2024-07-24 10:43:13.022199] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:17:46.376 [2024-07-24 10:43:13.022508] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.376 BaseBdev4 00:17:46.376 10:43:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:46.376 10:43:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:17:46.376 10:43:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:46.376 10:43:13 -- common/autotest_common.sh@889 -- # local i 00:17:46.376 10:43:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:46.376 10:43:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:46.376 10:43:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.633 10:43:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:46.901 [ 00:17:46.901 { 00:17:46.901 "name": "BaseBdev4", 00:17:46.901 "aliases": [ 00:17:46.901 "3fd768e8-8ba1-4fac-a0dd-1135d8c4467d" 00:17:46.901 ], 00:17:46.901 "product_name": "Malloc disk", 00:17:46.901 "block_size": 512, 00:17:46.901 "num_blocks": 65536, 00:17:46.901 "uuid": "3fd768e8-8ba1-4fac-a0dd-1135d8c4467d", 00:17:46.901 "assigned_rate_limits": { 00:17:46.901 "rw_ios_per_sec": 0, 00:17:46.901 "rw_mbytes_per_sec": 0, 00:17:46.901 "r_mbytes_per_sec": 0, 00:17:46.901 "w_mbytes_per_sec": 0 00:17:46.901 }, 00:17:46.901 "claimed": true, 00:17:46.901 "claim_type": "exclusive_write", 00:17:46.901 "zoned": false, 00:17:46.901 "supported_io_types": { 00:17:46.901 "read": true, 00:17:46.901 "write": true, 00:17:46.901 "unmap": true, 00:17:46.901 "write_zeroes": true, 00:17:46.901 "flush": true, 00:17:46.901 "reset": true, 00:17:46.901 "compare": false, 00:17:46.901 "compare_and_write": false, 00:17:46.901 "abort": true, 00:17:46.901 "nvme_admin": false, 00:17:46.901 "nvme_io": false 00:17:46.901 }, 00:17:46.901 "memory_domains": [ 00:17:46.901 { 00:17:46.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.901 "dma_device_type": 2 00:17:46.901 } 00:17:46.901 ], 00:17:46.901 "driver_specific": {} 00:17:46.901 } 00:17:46.901 ] 00:17:46.901 10:43:13 -- common/autotest_common.sh@895 -- # return 0 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.901 10:43:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.161 10:43:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.161 "name": "Existed_Raid", 00:17:47.162 "uuid": "faa8e3d5-4b39-44bb-b3c2-ed899e5e4217", 00:17:47.162 "strip_size_kb": 64, 00:17:47.162 "state": "online", 00:17:47.162 "raid_level": "raid0", 00:17:47.162 "superblock": true, 00:17:47.162 "num_base_bdevs": 4, 00:17:47.162 "num_base_bdevs_discovered": 4, 00:17:47.162 "num_base_bdevs_operational": 4, 00:17:47.162 "base_bdevs_list": [ 00:17:47.162 { 00:17:47.162 "name": "BaseBdev1", 00:17:47.162 "uuid": "daaf4454-f99c-498d-b6a5-2114d509a40a", 00:17:47.162 "is_configured": true, 00:17:47.162 "data_offset": 2048, 00:17:47.162 "data_size": 63488 00:17:47.162 }, 00:17:47.162 { 00:17:47.162 "name": "BaseBdev2", 00:17:47.162 "uuid": "18d2a66e-9ef5-480a-9956-9142740b8139", 00:17:47.162 "is_configured": true, 00:17:47.162 "data_offset": 2048, 00:17:47.162 "data_size": 63488 00:17:47.162 }, 00:17:47.162 { 00:17:47.162 "name": "BaseBdev3", 00:17:47.162 "uuid": "76b3f87b-73fd-4556-9516-90c50aaee400", 00:17:47.162 "is_configured": true, 00:17:47.162 "data_offset": 2048, 00:17:47.162 "data_size": 63488 00:17:47.162 }, 00:17:47.162 { 00:17:47.162 "name": "BaseBdev4", 00:17:47.162 "uuid": "3fd768e8-8ba1-4fac-a0dd-1135d8c4467d", 00:17:47.162 "is_configured": true, 00:17:47.162 "data_offset": 2048, 00:17:47.162 "data_size": 63488 00:17:47.162 } 00:17:47.162 ] 00:17:47.162 }' 00:17:47.162 10:43:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.162 10:43:13 -- common/autotest_common.sh@10 -- # set +x 00:17:48.102 10:43:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.102 [2024-07-24 10:43:14.653200] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.102 [2024-07-24 10:43:14.653542] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.102 [2024-07-24 10:43:14.653768] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.102 10:43:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.102 10:43:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:48.102 10:43:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.102 10:43:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:48.102 10:43:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:48.102 10:43:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.103 10:43:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.362 10:43:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.362 "name": "Existed_Raid", 00:17:48.362 "uuid": "faa8e3d5-4b39-44bb-b3c2-ed899e5e4217", 00:17:48.362 "strip_size_kb": 64, 00:17:48.362 "state": "offline", 00:17:48.362 "raid_level": "raid0", 00:17:48.362 "superblock": true, 00:17:48.362 "num_base_bdevs": 4, 00:17:48.362 "num_base_bdevs_discovered": 3, 00:17:48.362 "num_base_bdevs_operational": 3, 00:17:48.362 "base_bdevs_list": [ 00:17:48.362 { 00:17:48.362 "name": null, 00:17:48.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.362 "is_configured": false, 00:17:48.362 "data_offset": 2048, 00:17:48.362 "data_size": 63488 00:17:48.362 }, 00:17:48.362 { 00:17:48.362 "name": "BaseBdev2", 00:17:48.362 "uuid": "18d2a66e-9ef5-480a-9956-9142740b8139", 00:17:48.362 "is_configured": true, 00:17:48.362 "data_offset": 2048, 00:17:48.362 "data_size": 63488 00:17:48.362 }, 00:17:48.362 { 00:17:48.362 "name": "BaseBdev3", 00:17:48.362 "uuid": "76b3f87b-73fd-4556-9516-90c50aaee400", 00:17:48.362 "is_configured": true, 00:17:48.362 "data_offset": 2048, 00:17:48.362 "data_size": 63488 00:17:48.362 }, 00:17:48.362 { 00:17:48.363 "name": "BaseBdev4", 00:17:48.363 "uuid": "3fd768e8-8ba1-4fac-a0dd-1135d8c4467d", 00:17:48.363 "is_configured": true, 00:17:48.363 "data_offset": 2048, 00:17:48.363 "data_size": 63488 00:17:48.363 } 00:17:48.363 ] 00:17:48.363 }' 00:17:48.363 10:43:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.363 10:43:14 -- common/autotest_common.sh@10 -- # set +x 00:17:48.928 10:43:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:48.928 10:43:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:48.928 10:43:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.928 10:43:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.186 10:43:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.186 10:43:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.186 10:43:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:49.444 [2024-07-24 10:43:16.123011] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:49.701 10:43:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:49.701 10:43:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.701 10:43:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.701 10:43:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.959 10:43:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.959 10:43:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.959 10:43:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:17:50.218 [2024-07-24 10:43:16.680104] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.218 10:43:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.218 10:43:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.218 10:43:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.218 10:43:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.477 10:43:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.477 10:43:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.477 10:43:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:50.735 [2024-07-24 10:43:17.269787] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:50.735 [2024-07-24 10:43:17.270084] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:17:50.735 10:43:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.735 10:43:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.735 10:43:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.735 10:43:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:50.993 10:43:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:50.993 10:43:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:50.993 10:43:17 -- bdev/bdev_raid.sh@287 -- # killprocess 129726 00:17:50.993 10:43:17 -- common/autotest_common.sh@926 -- # '[' -z 129726 ']' 00:17:50.993 10:43:17 -- common/autotest_common.sh@930 -- # kill -0 129726 00:17:50.993 10:43:17 -- common/autotest_common.sh@931 -- # uname 00:17:50.993 10:43:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.993 10:43:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129726 00:17:50.993 killing process with pid 129726 00:17:50.993 10:43:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:50.993 10:43:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:50.993 10:43:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129726' 00:17:50.993 10:43:17 -- common/autotest_common.sh@945 -- # kill 129726 00:17:50.993 10:43:17 -- common/autotest_common.sh@950 -- # wait 129726 00:17:50.993 [2024-07-24 10:43:17.615926] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.993 [2024-07-24 10:43:17.616079] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.559 10:43:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:51.559 00:17:51.559 real 0m15.285s 00:17:51.559 user 0m27.967s 00:17:51.559 sys 0m2.089s 00:17:51.559 10:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.559 10:43:17 -- common/autotest_common.sh@10 -- # set +x 00:17:51.559 ************************************ 00:17:51.559 END TEST raid_state_function_test_sb 00:17:51.559 ************************************ 00:17:51.559 10:43:17 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:51.559 10:43:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:51.559 10:43:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.559 10:43:17 -- common/autotest_common.sh@10 -- # set +x 00:17:51.559 ************************************ 00:17:51.559 START 
TEST raid_superblock_test 00:17:51.559 ************************************ 00:17:51.559 10:43:18 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@357 -- # raid_pid=130182 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@358 -- # waitforlisten 130182 /var/tmp/spdk-raid.sock 00:17:51.559 10:43:18 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:51.559 10:43:18 -- common/autotest_common.sh@819 -- # '[' -z 130182 ']' 00:17:51.559 10:43:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.559 10:43:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.559 10:43:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.559 10:43:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.559 10:43:18 -- common/autotest_common.sh@10 -- # set +x 00:17:51.559 [2024-07-24 10:43:18.067677] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:51.559 [2024-07-24 10:43:18.068250] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130182 ] 00:17:51.559 [2024-07-24 10:43:18.222934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.835 [2024-07-24 10:43:18.353488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.835 [2024-07-24 10:43:18.434387] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.768 10:43:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.768 10:43:19 -- common/autotest_common.sh@852 -- # return 0 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:52.768 malloc1 00:17:52.768 10:43:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.026 [2024-07-24 10:43:19.567053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.026 [2024-07-24 10:43:19.567423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.026 [2024-07-24 10:43:19.567642] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:17:53.026 [2024-07-24 10:43:19.567832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.026 [2024-07-24 10:43:19.571144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.026 [2024-07-24 10:43:19.571340] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.026 pt1 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.026 10:43:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:53.285 malloc2 00:17:53.285 10:43:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:53.543 [2024-07-24 10:43:20.066580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.543 [2024-07-24 10:43:20.066996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.543 [2024-07-24 10:43:20.067183] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:17:53.543 [2024-07-24 10:43:20.067353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.543 [2024-07-24 10:43:20.070281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.543 [2024-07-24 10:43:20.070492] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.543 pt2 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.543 10:43:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:53.801 malloc3 00:17:53.801 10:43:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:54.059 [2024-07-24 10:43:20.594649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:54.059 [2024-07-24 10:43:20.594966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.059 [2024-07-24 10:43:20.595151] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:54.059 [2024-07-24 10:43:20.595321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.059 [2024-07-24 10:43:20.598375] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.059 [2024-07-24 10:43:20.598566] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:54.059 pt3 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.059 10:43:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:54.316 malloc4 00:17:54.316 10:43:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:17:54.574 [2024-07-24 10:43:21.110973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:54.574 [2024-07-24 10:43:21.111494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.574 [2024-07-24 10:43:21.111702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:54.574 [2024-07-24 10:43:21.111886] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.574 [2024-07-24 10:43:21.114923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.574 [2024-07-24 10:43:21.115135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:54.574 pt4 00:17:54.574 10:43:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:54.574 10:43:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:54.575 10:43:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:54.832 [2024-07-24 10:43:21.379782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.832 [2024-07-24 10:43:21.382802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.832 [2024-07-24 10:43:21.383060] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:54.832 [2024-07-24 10:43:21.383280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:54.832 [2024-07-24 10:43:21.383793] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:17:54.832 [2024-07-24 10:43:21.383985] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:54.832 [2024-07-24 10:43:21.384201] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:17:54.832 [2024-07-24 10:43:21.384742] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:17:54.832 [2024-07-24 10:43:21.384918] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:17:54.832 [2024-07-24 10:43:21.385274] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.832 10:43:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.090 10:43:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.091 "name": "raid_bdev1", 00:17:55.091 "uuid": 
"63d521b1-e312-4b72-af03-49e1e1ef9cc3", 00:17:55.091 "strip_size_kb": 64, 00:17:55.091 "state": "online", 00:17:55.091 "raid_level": "raid0", 00:17:55.091 "superblock": true, 00:17:55.091 "num_base_bdevs": 4, 00:17:55.091 "num_base_bdevs_discovered": 4, 00:17:55.091 "num_base_bdevs_operational": 4, 00:17:55.091 "base_bdevs_list": [ 00:17:55.091 { 00:17:55.091 "name": "pt1", 00:17:55.091 "uuid": "20f3d71b-d704-5d58-9610-66a1155dc003", 00:17:55.091 "is_configured": true, 00:17:55.091 "data_offset": 2048, 00:17:55.091 "data_size": 63488 00:17:55.091 }, 00:17:55.091 { 00:17:55.091 "name": "pt2", 00:17:55.091 "uuid": "755cfee1-0a65-5424-a845-790e86b6f483", 00:17:55.091 "is_configured": true, 00:17:55.091 "data_offset": 2048, 00:17:55.091 "data_size": 63488 00:17:55.091 }, 00:17:55.091 { 00:17:55.091 "name": "pt3", 00:17:55.091 "uuid": "ce27a1e2-f4e4-5406-af77-ff4bb1315f81", 00:17:55.091 "is_configured": true, 00:17:55.091 "data_offset": 2048, 00:17:55.091 "data_size": 63488 00:17:55.091 }, 00:17:55.091 { 00:17:55.091 "name": "pt4", 00:17:55.091 "uuid": "a0113625-c985-5696-8995-cb4a8630cd0f", 00:17:55.091 "is_configured": true, 00:17:55.091 "data_offset": 2048, 00:17:55.091 "data_size": 63488 00:17:55.091 } 00:17:55.091 ] 00:17:55.091 }' 00:17:55.091 10:43:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.091 10:43:21 -- common/autotest_common.sh@10 -- # set +x 00:17:55.657 10:43:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:55.657 10:43:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.915 [2024-07-24 10:43:22.484422] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.915 10:43:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=63d521b1-e312-4b72-af03-49e1e1ef9cc3 00:17:55.915 10:43:22 -- bdev/bdev_raid.sh@380 -- # '[' -z 63d521b1-e312-4b72-af03-49e1e1ef9cc3 ']' 00:17:55.915 10:43:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:56.174 [2024-07-24 10:43:22.768200] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.175 [2024-07-24 10:43:22.768548] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.175 [2024-07-24 10:43:22.768858] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.175 [2024-07-24 10:43:22.769079] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.175 [2024-07-24 10:43:22.769217] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:17:56.175 10:43:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.175 10:43:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:56.432 10:43:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:56.432 10:43:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:56.432 10:43:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:56.432 10:43:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:56.689 10:43:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:56.689 10:43:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:17:56.946 10:43:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:56.946 10:43:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:57.204 10:43:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.204 10:43:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:57.461 10:43:23 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:57.461 10:43:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:57.717 10:43:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:57.717 10:43:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:57.717 10:43:24 -- common/autotest_common.sh@640 -- # local es=0 00:17:57.717 10:43:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:57.717 10:43:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.717 10:43:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.717 10:43:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.717 10:43:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.717 10:43:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.717 10:43:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.717 10:43:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:57.717 10:43:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:57.717 10:43:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:57.975 [2024-07-24 10:43:24.464559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:57.975 [2024-07-24 10:43:24.467368] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:57.975 [2024-07-24 10:43:24.467574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:57.975 [2024-07-24 10:43:24.467775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:57.975 [2024-07-24 10:43:24.467977] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:57.975 [2024-07-24 10:43:24.468204] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:57.975 [2024-07-24 10:43:24.468376] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:57.975 [2024-07-24 10:43:24.468555] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:57.975 [2024-07-24 10:43:24.468723] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.975 [2024-07-24 10:43:24.468834] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:17:57.975 request: 00:17:57.975 { 00:17:57.975 "name": "raid_bdev1", 00:17:57.975 "raid_level": "raid0", 00:17:57.975 "base_bdevs": [ 00:17:57.975 "malloc1", 00:17:57.975 "malloc2", 00:17:57.975 "malloc3", 00:17:57.975 "malloc4" 00:17:57.975 ], 00:17:57.975 "superblock": false, 00:17:57.975 "strip_size_kb": 64, 00:17:57.975 "method": "bdev_raid_create", 00:17:57.975 "req_id": 1 00:17:57.975 } 00:17:57.975 Got JSON-RPC error response 00:17:57.975 response: 00:17:57.975 { 00:17:57.975 "code": -17, 00:17:57.975 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:57.975 } 00:17:57.975 10:43:24 -- common/autotest_common.sh@643 -- # es=1 00:17:57.975 10:43:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:57.975 10:43:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:57.975 10:43:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:57.975 10:43:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.975 10:43:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:58.234 10:43:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:58.234 10:43:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:58.234 10:43:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.492 [2024-07-24 10:43:24.953313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.492 [2024-07-24 10:43:24.953769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.492 [2024-07-24 10:43:24.953864] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:58.492 [2024-07-24 10:43:24.954207] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.492 [2024-07-24 10:43:24.957150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.492 [2024-07-24 10:43:24.957366] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.492 [2024-07-24 10:43:24.957621] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:58.492 [2024-07-24 10:43:24.957866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.492 pt1 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.492 10:43:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.751 10:43:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.751 "name": "raid_bdev1", 00:17:58.751 "uuid": "63d521b1-e312-4b72-af03-49e1e1ef9cc3", 00:17:58.751 "strip_size_kb": 64, 00:17:58.751 "state": "configuring", 00:17:58.751 "raid_level": "raid0", 00:17:58.751 "superblock": true, 00:17:58.751 "num_base_bdevs": 4, 00:17:58.751 "num_base_bdevs_discovered": 1, 00:17:58.751 "num_base_bdevs_operational": 4, 00:17:58.751 "base_bdevs_list": [ 00:17:58.751 { 00:17:58.751 "name": "pt1", 00:17:58.751 "uuid": "20f3d71b-d704-5d58-9610-66a1155dc003", 00:17:58.751 "is_configured": true, 00:17:58.751 "data_offset": 2048, 00:17:58.751 "data_size": 63488 00:17:58.751 }, 00:17:58.751 { 00:17:58.751 "name": null, 00:17:58.751 "uuid": "755cfee1-0a65-5424-a845-790e86b6f483", 00:17:58.751 "is_configured": false, 00:17:58.751 "data_offset": 2048, 00:17:58.751 "data_size": 63488 00:17:58.751 }, 00:17:58.751 { 00:17:58.751 "name": null, 00:17:58.751 "uuid": "ce27a1e2-f4e4-5406-af77-ff4bb1315f81", 00:17:58.751 "is_configured": false, 00:17:58.751 "data_offset": 2048, 00:17:58.751 "data_size": 63488 00:17:58.751 }, 00:17:58.751 { 00:17:58.751 "name": null, 00:17:58.751 "uuid": "a0113625-c985-5696-8995-cb4a8630cd0f", 00:17:58.751 "is_configured": false, 00:17:58.751 "data_offset": 2048, 00:17:58.751 "data_size": 63488 00:17:58.751 } 00:17:58.751 ] 00:17:58.751 }' 00:17:58.751 10:43:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.751 10:43:25 -- common/autotest_common.sh@10 -- # set +x 00:17:59.315 10:43:25 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:59.315 10:43:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.572 [2024-07-24 10:43:26.030082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.572 [2024-07-24 10:43:26.030536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.572 [2024-07-24 10:43:26.030730] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:59.572 [2024-07-24 10:43:26.030872] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.572 [2024-07-24 10:43:26.031479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.572 [2024-07-24 10:43:26.031696] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.572 [2024-07-24 10:43:26.031957] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:59.572 [2024-07-24 10:43:26.032108] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.573 pt2 00:17:59.573 10:43:26 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:59.830 [2024-07-24 10:43:26.330232] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:59.830 10:43:26 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:59.830 10:43:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:59.830 10:43:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:59.830 10:43:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:59.831 10:43:26 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:59.831 10:43:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:59.831 10:43:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.831 10:43:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.831 10:43:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.831 10:43:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.831 10:43:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.831 10:43:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.088 10:43:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.088 "name": "raid_bdev1", 00:18:00.088 "uuid": "63d521b1-e312-4b72-af03-49e1e1ef9cc3", 00:18:00.088 "strip_size_kb": 64, 00:18:00.088 "state": "configuring", 00:18:00.088 "raid_level": "raid0", 00:18:00.088 "superblock": true, 00:18:00.088 "num_base_bdevs": 4, 00:18:00.088 "num_base_bdevs_discovered": 1, 00:18:00.088 "num_base_bdevs_operational": 4, 00:18:00.088 "base_bdevs_list": [ 00:18:00.088 { 00:18:00.088 "name": "pt1", 00:18:00.088 "uuid": "20f3d71b-d704-5d58-9610-66a1155dc003", 00:18:00.088 "is_configured": true, 00:18:00.088 "data_offset": 2048, 00:18:00.088 "data_size": 63488 00:18:00.088 }, 00:18:00.088 { 00:18:00.088 "name": null, 00:18:00.088 "uuid": "755cfee1-0a65-5424-a845-790e86b6f483", 00:18:00.088 "is_configured": false, 00:18:00.088 "data_offset": 2048, 00:18:00.088 "data_size": 63488 00:18:00.088 }, 00:18:00.088 { 00:18:00.088 "name": null, 00:18:00.088 "uuid": "ce27a1e2-f4e4-5406-af77-ff4bb1315f81", 00:18:00.088 "is_configured": false, 00:18:00.088 "data_offset": 2048, 00:18:00.088 "data_size": 63488 00:18:00.088 }, 00:18:00.088 { 00:18:00.088 "name": null, 00:18:00.088 "uuid": "a0113625-c985-5696-8995-cb4a8630cd0f", 00:18:00.088 "is_configured": false, 00:18:00.088 "data_offset": 2048, 00:18:00.088 "data_size": 63488 00:18:00.088 } 00:18:00.088 ] 00:18:00.088 }' 00:18:00.088 10:43:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.088 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:18:00.654 10:43:27 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:00.654 10:43:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:00.654 10:43:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.912 [2024-07-24 10:43:27.506537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.912 [2024-07-24 10:43:27.506913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.912 [2024-07-24 10:43:27.507011] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:00.912 [2024-07-24 10:43:27.507318] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.912 [2024-07-24 10:43:27.507946] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.912 [2024-07-24 10:43:27.508136] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.912 [2024-07-24 10:43:27.508367] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:00.912 [2024-07-24 10:43:27.508516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.912 pt2 00:18:00.912 10:43:27 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:00.912 10:43:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:00.912 10:43:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.171 [2024-07-24 10:43:27.754677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:01.171 [2024-07-24 10:43:27.755114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.171 [2024-07-24 10:43:27.755284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:01.171 [2024-07-24 10:43:27.755451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.171 [2024-07-24 10:43:27.756168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.171 [2024-07-24 10:43:27.756367] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.171 [2024-07-24 10:43:27.756593] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:01.171 [2024-07-24 10:43:27.756734] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.171 pt3 00:18:01.171 10:43:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:01.171 10:43:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:01.171 10:43:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:01.430 [2024-07-24 10:43:27.986699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:01.430 [2024-07-24 10:43:27.987088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.430 [2024-07-24 10:43:27.987180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:01.430 [2024-07-24 10:43:27.987482] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.430 [2024-07-24 10:43:27.988140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.430 [2024-07-24 10:43:27.988360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:01.430 [2024-07-24 10:43:27.988603] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:01.430 [2024-07-24 10:43:27.988762] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:01.430 [2024-07-24 10:43:27.988996] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:01.430 [2024-07-24 10:43:27.989142] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:01.430 [2024-07-24 10:43:27.989366] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:18:01.430 [2024-07-24 10:43:27.989937] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:01.430 [2024-07-24 10:43:27.990063] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:01.430 [2024-07-24 10:43:27.990301] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.430 pt4 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.430 10:43:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.688 10:43:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.688 "name": "raid_bdev1", 00:18:01.688 "uuid": "63d521b1-e312-4b72-af03-49e1e1ef9cc3", 00:18:01.688 "strip_size_kb": 64, 00:18:01.688 "state": "online", 00:18:01.688 "raid_level": "raid0", 00:18:01.688 "superblock": true, 00:18:01.688 "num_base_bdevs": 4, 00:18:01.688 "num_base_bdevs_discovered": 4, 00:18:01.688 "num_base_bdevs_operational": 4, 00:18:01.688 "base_bdevs_list": [ 00:18:01.688 { 00:18:01.688 "name": "pt1", 00:18:01.688 "uuid": "20f3d71b-d704-5d58-9610-66a1155dc003", 00:18:01.688 "is_configured": true, 00:18:01.688 "data_offset": 2048, 00:18:01.688 "data_size": 63488 00:18:01.688 }, 00:18:01.688 { 00:18:01.688 "name": "pt2", 00:18:01.688 "uuid": "755cfee1-0a65-5424-a845-790e86b6f483", 00:18:01.688 "is_configured": true, 00:18:01.688 "data_offset": 2048, 00:18:01.688 "data_size": 63488 00:18:01.688 }, 00:18:01.688 { 00:18:01.688 "name": "pt3", 00:18:01.688 "uuid": "ce27a1e2-f4e4-5406-af77-ff4bb1315f81", 00:18:01.688 "is_configured": true, 00:18:01.688 "data_offset": 2048, 00:18:01.688 "data_size": 63488 00:18:01.688 }, 00:18:01.688 { 00:18:01.688 "name": "pt4", 00:18:01.688 "uuid": "a0113625-c985-5696-8995-cb4a8630cd0f", 00:18:01.688 "is_configured": true, 00:18:01.688 "data_offset": 2048, 00:18:01.688 "data_size": 63488 00:18:01.688 } 00:18:01.688 ] 00:18:01.688 }' 00:18:01.688 10:43:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.688 10:43:28 -- common/autotest_common.sh@10 -- # set +x 00:18:02.623 10:43:28 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:02.623 10:43:28 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:02.623 [2024-07-24 10:43:29.183215] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.623 10:43:29 -- bdev/bdev_raid.sh@430 -- # '[' 63d521b1-e312-4b72-af03-49e1e1ef9cc3 '!=' 63d521b1-e312-4b72-af03-49e1e1ef9cc3 ']' 00:18:02.623 10:43:29 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:02.623 10:43:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:02.623 10:43:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:02.623 10:43:29 -- bdev/bdev_raid.sh@511 -- # killprocess 130182 00:18:02.623 10:43:29 -- common/autotest_common.sh@926 -- # '[' -z 130182 ']' 00:18:02.623 10:43:29 -- common/autotest_common.sh@930 -- # kill -0 130182 00:18:02.623 10:43:29 -- common/autotest_common.sh@931 -- # uname 00:18:02.623 10:43:29 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.623 10:43:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130182 00:18:02.623 10:43:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.623 10:43:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.623 10:43:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130182' 00:18:02.623 killing process with pid 130182 00:18:02.623 10:43:29 -- common/autotest_common.sh@945 -- # kill 130182 00:18:02.623 10:43:29 -- common/autotest_common.sh@950 -- # wait 130182 00:18:02.623 [2024-07-24 10:43:29.227898] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.623 [2024-07-24 10:43:29.228018] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.623 [2024-07-24 10:43:29.228111] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.623 [2024-07-24 10:43:29.228171] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:02.623 [2024-07-24 10:43:29.291141] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:03.189 00:18:03.189 real 0m11.622s 00:18:03.189 user 0m20.975s 00:18:03.189 sys 0m1.563s 00:18:03.189 10:43:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.189 10:43:29 -- common/autotest_common.sh@10 -- # set +x 00:18:03.189 ************************************ 00:18:03.189 END TEST raid_superblock_test 00:18:03.189 ************************************ 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:03.189 10:43:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:03.189 10:43:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.189 10:43:29 -- common/autotest_common.sh@10 -- # set +x 00:18:03.189 ************************************ 00:18:03.189 START TEST raid_state_function_test 00:18:03.189 ************************************ 00:18:03.189 10:43:29 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:03.189 
10:43:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=130510 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:03.189 Process raid pid: 130510 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130510' 00:18:03.189 10:43:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130510 /var/tmp/spdk-raid.sock 00:18:03.189 10:43:29 -- common/autotest_common.sh@819 -- # '[' -z 130510 ']' 00:18:03.189 10:43:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:03.189 10:43:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:03.189 10:43:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:03.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:03.189 10:43:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:03.189 10:43:29 -- common/autotest_common.sh@10 -- # set +x 00:18:03.189 [2024-07-24 10:43:29.748654] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:18:03.190 [2024-07-24 10:43:29.749137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.448 [2024-07-24 10:43:29.893607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.448 [2024-07-24 10:43:30.014788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.448 [2024-07-24 10:43:30.091929] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:04.013 10:43:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:04.013 10:43:30 -- common/autotest_common.sh@852 -- # return 0 00:18:04.013 10:43:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:04.273 [2024-07-24 10:43:30.900987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:04.273 [2024-07-24 10:43:30.901367] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:04.273 [2024-07-24 10:43:30.901495] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:04.273 [2024-07-24 10:43:30.901657] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:04.273 [2024-07-24 10:43:30.901768] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:04.273 [2024-07-24 10:43:30.901981] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:04.273 [2024-07-24 10:43:30.902117] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:04.273 [2024-07-24 10:43:30.902194] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:04.273 10:43:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.534 10:43:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.534 "name": "Existed_Raid", 00:18:04.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.534 "strip_size_kb": 64, 00:18:04.534 "state": "configuring", 00:18:04.534 "raid_level": "concat", 00:18:04.534 "superblock": false, 00:18:04.534 "num_base_bdevs": 4, 00:18:04.534 "num_base_bdevs_discovered": 0, 00:18:04.534 "num_base_bdevs_operational": 4, 00:18:04.534 "base_bdevs_list": [ 00:18:04.534 { 00:18:04.534 
"name": "BaseBdev1", 00:18:04.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.534 "is_configured": false, 00:18:04.534 "data_offset": 0, 00:18:04.534 "data_size": 0 00:18:04.534 }, 00:18:04.534 { 00:18:04.534 "name": "BaseBdev2", 00:18:04.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.534 "is_configured": false, 00:18:04.534 "data_offset": 0, 00:18:04.534 "data_size": 0 00:18:04.534 }, 00:18:04.534 { 00:18:04.534 "name": "BaseBdev3", 00:18:04.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.534 "is_configured": false, 00:18:04.534 "data_offset": 0, 00:18:04.534 "data_size": 0 00:18:04.534 }, 00:18:04.534 { 00:18:04.534 "name": "BaseBdev4", 00:18:04.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.534 "is_configured": false, 00:18:04.534 "data_offset": 0, 00:18:04.534 "data_size": 0 00:18:04.534 } 00:18:04.534 ] 00:18:04.534 }' 00:18:04.792 10:43:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.792 10:43:31 -- common/autotest_common.sh@10 -- # set +x 00:18:05.359 10:43:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:05.617 [2024-07-24 10:43:32.125109] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:05.617 [2024-07-24 10:43:32.125460] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:05.617 10:43:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:05.876 [2024-07-24 10:43:32.385252] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.876 [2024-07-24 10:43:32.385542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.876 [2024-07-24 10:43:32.385671] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.876 [2024-07-24 10:43:32.385817] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.876 [2024-07-24 10:43:32.385940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.876 [2024-07-24 10:43:32.386096] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.876 [2024-07-24 10:43:32.386206] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:05.876 [2024-07-24 10:43:32.386349] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:05.876 10:43:32 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:06.134 [2024-07-24 10:43:32.665319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:06.134 BaseBdev1 00:18:06.134 10:43:32 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:06.134 10:43:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:06.134 10:43:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:06.134 10:43:32 -- common/autotest_common.sh@889 -- # local i 00:18:06.134 10:43:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:06.134 10:43:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:06.134 10:43:32 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.393 10:43:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:06.651 [ 00:18:06.651 { 00:18:06.651 "name": "BaseBdev1", 00:18:06.651 "aliases": [ 00:18:06.651 "9471fb8e-86b8-4b55-a7cc-9db9990a64ea" 00:18:06.651 ], 00:18:06.651 "product_name": "Malloc disk", 00:18:06.651 "block_size": 512, 00:18:06.651 "num_blocks": 65536, 00:18:06.651 "uuid": "9471fb8e-86b8-4b55-a7cc-9db9990a64ea", 00:18:06.651 "assigned_rate_limits": { 00:18:06.651 "rw_ios_per_sec": 0, 00:18:06.651 "rw_mbytes_per_sec": 0, 00:18:06.651 "r_mbytes_per_sec": 0, 00:18:06.651 "w_mbytes_per_sec": 0 00:18:06.651 }, 00:18:06.651 "claimed": true, 00:18:06.651 "claim_type": "exclusive_write", 00:18:06.651 "zoned": false, 00:18:06.651 "supported_io_types": { 00:18:06.651 "read": true, 00:18:06.651 "write": true, 00:18:06.651 "unmap": true, 00:18:06.651 "write_zeroes": true, 00:18:06.651 "flush": true, 00:18:06.651 "reset": true, 00:18:06.651 "compare": false, 00:18:06.651 "compare_and_write": false, 00:18:06.651 "abort": true, 00:18:06.651 "nvme_admin": false, 00:18:06.651 "nvme_io": false 00:18:06.651 }, 00:18:06.651 "memory_domains": [ 00:18:06.651 { 00:18:06.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.651 "dma_device_type": 2 00:18:06.651 } 00:18:06.651 ], 00:18:06.651 "driver_specific": {} 00:18:06.651 } 00:18:06.651 ] 00:18:06.651 10:43:33 -- common/autotest_common.sh@895 -- # return 0 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.651 10:43:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:06.910 10:43:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.910 "name": "Existed_Raid", 00:18:06.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.910 "strip_size_kb": 64, 00:18:06.910 "state": "configuring", 00:18:06.910 "raid_level": "concat", 00:18:06.911 "superblock": false, 00:18:06.911 "num_base_bdevs": 4, 00:18:06.911 "num_base_bdevs_discovered": 1, 00:18:06.911 "num_base_bdevs_operational": 4, 00:18:06.911 "base_bdevs_list": [ 00:18:06.911 { 00:18:06.911 "name": "BaseBdev1", 00:18:06.911 "uuid": "9471fb8e-86b8-4b55-a7cc-9db9990a64ea", 00:18:06.911 "is_configured": true, 00:18:06.911 "data_offset": 0, 00:18:06.911 "data_size": 65536 00:18:06.911 }, 00:18:06.911 { 00:18:06.911 "name": "BaseBdev2", 00:18:06.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.911 "is_configured": false, 00:18:06.911 "data_offset": 0, 00:18:06.911 "data_size": 0 00:18:06.911 }, 
00:18:06.911 { 00:18:06.911 "name": "BaseBdev3", 00:18:06.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.911 "is_configured": false, 00:18:06.911 "data_offset": 0, 00:18:06.911 "data_size": 0 00:18:06.911 }, 00:18:06.911 { 00:18:06.911 "name": "BaseBdev4", 00:18:06.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.911 "is_configured": false, 00:18:06.911 "data_offset": 0, 00:18:06.911 "data_size": 0 00:18:06.911 } 00:18:06.911 ] 00:18:06.911 }' 00:18:06.911 10:43:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.911 10:43:33 -- common/autotest_common.sh@10 -- # set +x 00:18:07.477 10:43:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:07.736 [2024-07-24 10:43:34.245793] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:07.736 [2024-07-24 10:43:34.246199] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:07.736 10:43:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:07.736 10:43:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:07.994 [2024-07-24 10:43:34.517953] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.994 [2024-07-24 10:43:34.520866] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.994 [2024-07-24 10:43:34.521129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.994 [2024-07-24 10:43:34.521270] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.994 [2024-07-24 10:43:34.521345] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.994 [2024-07-24 10:43:34.521452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.994 [2024-07-24 10:43:34.521518] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.994 10:43:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.253 10:43:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.253 "name": "Existed_Raid", 00:18:08.253 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.253 "strip_size_kb": 64, 00:18:08.253 "state": "configuring", 00:18:08.253 "raid_level": "concat", 00:18:08.253 "superblock": false, 00:18:08.253 "num_base_bdevs": 4, 00:18:08.253 "num_base_bdevs_discovered": 1, 00:18:08.253 "num_base_bdevs_operational": 4, 00:18:08.253 "base_bdevs_list": [ 00:18:08.253 { 00:18:08.253 "name": "BaseBdev1", 00:18:08.253 "uuid": "9471fb8e-86b8-4b55-a7cc-9db9990a64ea", 00:18:08.253 "is_configured": true, 00:18:08.253 "data_offset": 0, 00:18:08.253 "data_size": 65536 00:18:08.253 }, 00:18:08.253 { 00:18:08.253 "name": "BaseBdev2", 00:18:08.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.253 "is_configured": false, 00:18:08.253 "data_offset": 0, 00:18:08.253 "data_size": 0 00:18:08.253 }, 00:18:08.253 { 00:18:08.253 "name": "BaseBdev3", 00:18:08.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.253 "is_configured": false, 00:18:08.253 "data_offset": 0, 00:18:08.253 "data_size": 0 00:18:08.253 }, 00:18:08.253 { 00:18:08.253 "name": "BaseBdev4", 00:18:08.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.253 "is_configured": false, 00:18:08.253 "data_offset": 0, 00:18:08.253 "data_size": 0 00:18:08.253 } 00:18:08.253 ] 00:18:08.253 }' 00:18:08.253 10:43:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.253 10:43:34 -- common/autotest_common.sh@10 -- # set +x 00:18:08.821 10:43:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:09.387 [2024-07-24 10:43:35.785402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:09.387 BaseBdev2 00:18:09.387 10:43:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:09.387 10:43:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:09.387 10:43:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:09.387 10:43:35 -- common/autotest_common.sh@889 -- # local i 00:18:09.387 10:43:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:09.387 10:43:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:09.387 10:43:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:09.387 10:43:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:09.645 [ 00:18:09.645 { 00:18:09.645 "name": "BaseBdev2", 00:18:09.645 "aliases": [ 00:18:09.645 "20ce676f-9651-4286-9592-5c75ac728e54" 00:18:09.646 ], 00:18:09.646 "product_name": "Malloc disk", 00:18:09.646 "block_size": 512, 00:18:09.646 "num_blocks": 65536, 00:18:09.646 "uuid": "20ce676f-9651-4286-9592-5c75ac728e54", 00:18:09.646 "assigned_rate_limits": { 00:18:09.646 "rw_ios_per_sec": 0, 00:18:09.646 "rw_mbytes_per_sec": 0, 00:18:09.646 "r_mbytes_per_sec": 0, 00:18:09.646 "w_mbytes_per_sec": 0 00:18:09.646 }, 00:18:09.646 "claimed": true, 00:18:09.646 "claim_type": "exclusive_write", 00:18:09.646 "zoned": false, 00:18:09.646 "supported_io_types": { 00:18:09.646 "read": true, 00:18:09.646 "write": true, 00:18:09.646 "unmap": true, 00:18:09.646 "write_zeroes": true, 00:18:09.646 "flush": true, 00:18:09.646 "reset": true, 00:18:09.646 "compare": false, 00:18:09.646 "compare_and_write": false, 00:18:09.646 "abort": true, 00:18:09.646 "nvme_admin": false, 00:18:09.646 "nvme_io": false 00:18:09.646 }, 00:18:09.646 "memory_domains": [ 
00:18:09.646 { 00:18:09.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.646 "dma_device_type": 2 00:18:09.646 } 00:18:09.646 ], 00:18:09.646 "driver_specific": {} 00:18:09.646 } 00:18:09.646 ] 00:18:09.646 10:43:36 -- common/autotest_common.sh@895 -- # return 0 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.646 10:43:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.905 10:43:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.905 "name": "Existed_Raid", 00:18:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.905 "strip_size_kb": 64, 00:18:09.905 "state": "configuring", 00:18:09.905 "raid_level": "concat", 00:18:09.905 "superblock": false, 00:18:09.905 "num_base_bdevs": 4, 00:18:09.905 "num_base_bdevs_discovered": 2, 00:18:09.905 "num_base_bdevs_operational": 4, 00:18:09.905 "base_bdevs_list": [ 00:18:09.905 { 00:18:09.905 "name": "BaseBdev1", 00:18:09.905 "uuid": "9471fb8e-86b8-4b55-a7cc-9db9990a64ea", 00:18:09.905 "is_configured": true, 00:18:09.905 "data_offset": 0, 00:18:09.905 "data_size": 65536 00:18:09.905 }, 00:18:09.905 { 00:18:09.905 "name": "BaseBdev2", 00:18:09.905 "uuid": "20ce676f-9651-4286-9592-5c75ac728e54", 00:18:09.905 "is_configured": true, 00:18:09.905 "data_offset": 0, 00:18:09.905 "data_size": 65536 00:18:09.905 }, 00:18:09.905 { 00:18:09.905 "name": "BaseBdev3", 00:18:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.905 "is_configured": false, 00:18:09.905 "data_offset": 0, 00:18:09.905 "data_size": 0 00:18:09.905 }, 00:18:09.905 { 00:18:09.905 "name": "BaseBdev4", 00:18:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.905 "is_configured": false, 00:18:09.905 "data_offset": 0, 00:18:09.905 "data_size": 0 00:18:09.905 } 00:18:09.905 ] 00:18:09.905 }' 00:18:09.905 10:43:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.905 10:43:36 -- common/autotest_common.sh@10 -- # set +x 00:18:10.839 10:43:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:10.839 [2024-07-24 10:43:37.374499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:10.839 BaseBdev3 00:18:10.839 10:43:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:10.839 10:43:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:10.839 10:43:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:10.839 
10:43:37 -- common/autotest_common.sh@889 -- # local i 00:18:10.839 10:43:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:10.839 10:43:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:10.839 10:43:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.097 10:43:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:11.355 [ 00:18:11.355 { 00:18:11.355 "name": "BaseBdev3", 00:18:11.355 "aliases": [ 00:18:11.355 "7dd5d957-b74b-406e-ba48-3c2615f1e4dc" 00:18:11.355 ], 00:18:11.355 "product_name": "Malloc disk", 00:18:11.355 "block_size": 512, 00:18:11.355 "num_blocks": 65536, 00:18:11.355 "uuid": "7dd5d957-b74b-406e-ba48-3c2615f1e4dc", 00:18:11.355 "assigned_rate_limits": { 00:18:11.355 "rw_ios_per_sec": 0, 00:18:11.355 "rw_mbytes_per_sec": 0, 00:18:11.355 "r_mbytes_per_sec": 0, 00:18:11.355 "w_mbytes_per_sec": 0 00:18:11.355 }, 00:18:11.355 "claimed": true, 00:18:11.356 "claim_type": "exclusive_write", 00:18:11.356 "zoned": false, 00:18:11.356 "supported_io_types": { 00:18:11.356 "read": true, 00:18:11.356 "write": true, 00:18:11.356 "unmap": true, 00:18:11.356 "write_zeroes": true, 00:18:11.356 "flush": true, 00:18:11.356 "reset": true, 00:18:11.356 "compare": false, 00:18:11.356 "compare_and_write": false, 00:18:11.356 "abort": true, 00:18:11.356 "nvme_admin": false, 00:18:11.356 "nvme_io": false 00:18:11.356 }, 00:18:11.356 "memory_domains": [ 00:18:11.356 { 00:18:11.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.356 "dma_device_type": 2 00:18:11.356 } 00:18:11.356 ], 00:18:11.356 "driver_specific": {} 00:18:11.356 } 00:18:11.356 ] 00:18:11.356 10:43:37 -- common/autotest_common.sh@895 -- # return 0 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.356 10:43:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.614 10:43:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.614 "name": "Existed_Raid", 00:18:11.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.614 "strip_size_kb": 64, 00:18:11.614 "state": "configuring", 00:18:11.614 "raid_level": "concat", 00:18:11.614 "superblock": false, 00:18:11.614 "num_base_bdevs": 4, 00:18:11.614 "num_base_bdevs_discovered": 3, 00:18:11.614 "num_base_bdevs_operational": 4, 00:18:11.614 "base_bdevs_list": [ 00:18:11.614 { 00:18:11.614 "name": 
"BaseBdev1", 00:18:11.614 "uuid": "9471fb8e-86b8-4b55-a7cc-9db9990a64ea", 00:18:11.614 "is_configured": true, 00:18:11.614 "data_offset": 0, 00:18:11.614 "data_size": 65536 00:18:11.614 }, 00:18:11.614 { 00:18:11.614 "name": "BaseBdev2", 00:18:11.614 "uuid": "20ce676f-9651-4286-9592-5c75ac728e54", 00:18:11.614 "is_configured": true, 00:18:11.614 "data_offset": 0, 00:18:11.614 "data_size": 65536 00:18:11.614 }, 00:18:11.614 { 00:18:11.614 "name": "BaseBdev3", 00:18:11.614 "uuid": "7dd5d957-b74b-406e-ba48-3c2615f1e4dc", 00:18:11.614 "is_configured": true, 00:18:11.614 "data_offset": 0, 00:18:11.614 "data_size": 65536 00:18:11.614 }, 00:18:11.614 { 00:18:11.614 "name": "BaseBdev4", 00:18:11.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.614 "is_configured": false, 00:18:11.614 "data_offset": 0, 00:18:11.614 "data_size": 0 00:18:11.614 } 00:18:11.614 ] 00:18:11.614 }' 00:18:11.614 10:43:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.614 10:43:38 -- common/autotest_common.sh@10 -- # set +x 00:18:12.547 10:43:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:12.547 [2024-07-24 10:43:39.103408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:12.547 [2024-07-24 10:43:39.103873] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:12.547 [2024-07-24 10:43:39.104005] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:12.547 [2024-07-24 10:43:39.104300] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:12.547 [2024-07-24 10:43:39.104900] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:12.547 [2024-07-24 10:43:39.105048] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:12.547 [2024-07-24 10:43:39.105450] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.547 BaseBdev4 00:18:12.547 10:43:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:12.547 10:43:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:12.547 10:43:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:12.547 10:43:39 -- common/autotest_common.sh@889 -- # local i 00:18:12.547 10:43:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:12.547 10:43:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:12.547 10:43:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.818 10:43:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:13.110 [ 00:18:13.110 { 00:18:13.110 "name": "BaseBdev4", 00:18:13.110 "aliases": [ 00:18:13.110 "b919d304-14b4-4196-8696-936d4a433e85" 00:18:13.110 ], 00:18:13.110 "product_name": "Malloc disk", 00:18:13.110 "block_size": 512, 00:18:13.110 "num_blocks": 65536, 00:18:13.110 "uuid": "b919d304-14b4-4196-8696-936d4a433e85", 00:18:13.110 "assigned_rate_limits": { 00:18:13.110 "rw_ios_per_sec": 0, 00:18:13.110 "rw_mbytes_per_sec": 0, 00:18:13.110 "r_mbytes_per_sec": 0, 00:18:13.110 "w_mbytes_per_sec": 0 00:18:13.110 }, 00:18:13.110 "claimed": true, 00:18:13.110 "claim_type": "exclusive_write", 00:18:13.110 "zoned": false, 00:18:13.110 
"supported_io_types": { 00:18:13.110 "read": true, 00:18:13.110 "write": true, 00:18:13.110 "unmap": true, 00:18:13.110 "write_zeroes": true, 00:18:13.110 "flush": true, 00:18:13.110 "reset": true, 00:18:13.110 "compare": false, 00:18:13.110 "compare_and_write": false, 00:18:13.110 "abort": true, 00:18:13.110 "nvme_admin": false, 00:18:13.110 "nvme_io": false 00:18:13.110 }, 00:18:13.110 "memory_domains": [ 00:18:13.110 { 00:18:13.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.110 "dma_device_type": 2 00:18:13.110 } 00:18:13.110 ], 00:18:13.110 "driver_specific": {} 00:18:13.110 } 00:18:13.110 ] 00:18:13.110 10:43:39 -- common/autotest_common.sh@895 -- # return 0 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.110 10:43:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.369 10:43:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.369 "name": "Existed_Raid", 00:18:13.369 "uuid": "b54afe81-28aa-4e90-bf6e-ee4367863693", 00:18:13.369 "strip_size_kb": 64, 00:18:13.369 "state": "online", 00:18:13.369 "raid_level": "concat", 00:18:13.369 "superblock": false, 00:18:13.369 "num_base_bdevs": 4, 00:18:13.369 "num_base_bdevs_discovered": 4, 00:18:13.369 "num_base_bdevs_operational": 4, 00:18:13.369 "base_bdevs_list": [ 00:18:13.369 { 00:18:13.369 "name": "BaseBdev1", 00:18:13.369 "uuid": "9471fb8e-86b8-4b55-a7cc-9db9990a64ea", 00:18:13.369 "is_configured": true, 00:18:13.369 "data_offset": 0, 00:18:13.369 "data_size": 65536 00:18:13.369 }, 00:18:13.369 { 00:18:13.369 "name": "BaseBdev2", 00:18:13.369 "uuid": "20ce676f-9651-4286-9592-5c75ac728e54", 00:18:13.369 "is_configured": true, 00:18:13.369 "data_offset": 0, 00:18:13.369 "data_size": 65536 00:18:13.369 }, 00:18:13.369 { 00:18:13.369 "name": "BaseBdev3", 00:18:13.369 "uuid": "7dd5d957-b74b-406e-ba48-3c2615f1e4dc", 00:18:13.369 "is_configured": true, 00:18:13.369 "data_offset": 0, 00:18:13.369 "data_size": 65536 00:18:13.369 }, 00:18:13.369 { 00:18:13.369 "name": "BaseBdev4", 00:18:13.369 "uuid": "b919d304-14b4-4196-8696-936d4a433e85", 00:18:13.369 "is_configured": true, 00:18:13.369 "data_offset": 0, 00:18:13.369 "data_size": 65536 00:18:13.369 } 00:18:13.369 ] 00:18:13.369 }' 00:18:13.369 10:43:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.369 10:43:39 -- common/autotest_common.sh@10 -- # set +x 00:18:13.936 10:43:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:14.503 [2024-07-24 10:43:40.896188] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.503 [2024-07-24 10:43:40.896437] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.503 [2024-07-24 10:43:40.896674] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.503 10:43:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.503 10:43:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.503 "name": "Existed_Raid", 00:18:14.503 "uuid": "b54afe81-28aa-4e90-bf6e-ee4367863693", 00:18:14.503 "strip_size_kb": 64, 00:18:14.503 "state": "offline", 00:18:14.503 "raid_level": "concat", 00:18:14.503 "superblock": false, 00:18:14.503 "num_base_bdevs": 4, 00:18:14.503 "num_base_bdevs_discovered": 3, 00:18:14.503 "num_base_bdevs_operational": 3, 00:18:14.503 "base_bdevs_list": [ 00:18:14.503 { 00:18:14.503 "name": null, 00:18:14.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.503 "is_configured": false, 00:18:14.503 "data_offset": 0, 00:18:14.503 "data_size": 65536 00:18:14.503 }, 00:18:14.503 { 00:18:14.504 "name": "BaseBdev2", 00:18:14.504 "uuid": "20ce676f-9651-4286-9592-5c75ac728e54", 00:18:14.504 "is_configured": true, 00:18:14.504 "data_offset": 0, 00:18:14.504 "data_size": 65536 00:18:14.504 }, 00:18:14.504 { 00:18:14.504 "name": "BaseBdev3", 00:18:14.504 "uuid": "7dd5d957-b74b-406e-ba48-3c2615f1e4dc", 00:18:14.504 "is_configured": true, 00:18:14.504 "data_offset": 0, 00:18:14.504 "data_size": 65536 00:18:14.504 }, 00:18:14.504 { 00:18:14.504 "name": "BaseBdev4", 00:18:14.504 "uuid": "b919d304-14b4-4196-8696-936d4a433e85", 00:18:14.504 "is_configured": true, 00:18:14.504 "data_offset": 0, 00:18:14.504 "data_size": 65536 00:18:14.504 } 00:18:14.504 ] 00:18:14.504 }' 00:18:14.504 10:43:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.504 10:43:41 -- common/autotest_common.sh@10 -- # set +x 00:18:15.438 10:43:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:15.438 10:43:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:15.438 10:43:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:15.438 10:43:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:15.438 10:43:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:15.438 10:43:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:15.438 10:43:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:15.697 [2024-07-24 10:43:42.344095] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.697 10:43:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:15.697 10:43:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:15.697 10:43:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:15.697 10:43:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.955 10:43:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:15.955 10:43:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:15.955 10:43:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:16.214 [2024-07-24 10:43:42.872242] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.472 10:43:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.472 10:43:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.472 10:43:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.472 10:43:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.731 10:43:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.731 10:43:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.731 10:43:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:16.988 [2024-07-24 10:43:43.434569] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:16.988 [2024-07-24 10:43:43.434915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:16.988 10:43:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.988 10:43:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.988 10:43:43 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.988 10:43:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.265 10:43:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:17.265 10:43:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:17.265 10:43:43 -- bdev/bdev_raid.sh@287 -- # killprocess 130510 00:18:17.265 10:43:43 -- common/autotest_common.sh@926 -- # '[' -z 130510 ']' 00:18:17.265 10:43:43 -- common/autotest_common.sh@930 -- # kill -0 130510 00:18:17.265 10:43:43 -- common/autotest_common.sh@931 -- # uname 00:18:17.265 10:43:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.265 10:43:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130510 00:18:17.265 10:43:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:17.265 10:43:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:17.265 10:43:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130510' 00:18:17.265 killing process with pid 130510 00:18:17.265 10:43:43 -- common/autotest_common.sh@945 
-- # kill 130510 00:18:17.265 10:43:43 -- common/autotest_common.sh@950 -- # wait 130510 00:18:17.265 [2024-07-24 10:43:43.800611] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.265 [2024-07-24 10:43:43.800723] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.527 ************************************ 00:18:17.527 END TEST raid_state_function_test 00:18:17.527 ************************************ 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:17.527 00:18:17.527 real 0m14.363s 00:18:17.527 user 0m26.357s 00:18:17.527 sys 0m1.961s 00:18:17.527 10:43:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.527 10:43:44 -- common/autotest_common.sh@10 -- # set +x 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:17.527 10:43:44 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:17.527 10:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:17.527 10:43:44 -- common/autotest_common.sh@10 -- # set +x 00:18:17.527 ************************************ 00:18:17.527 START TEST raid_state_function_test_sb 00:18:17.527 ************************************ 00:18:17.527 10:43:44 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=130954 
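The raid_state_function_test_sb run starting here repeats the same configuring/online/offline checks, but with superblock_create_arg=-s, so every bdev_raid_create call in this test carries the -s flag. A sketch of the create call as it is issued later in this trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

With the superblock enabled, the base bdev entries in the state dumps below report data_offset 2048 and data_size 63488 (65536 - 2048) instead of 0 and 65536, and the assembled concat bdev is registered with blockcnt 253952 (4 x 63488) rather than the 262144 seen in the non-superblock test above.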
00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130954' 00:18:17.527 Process raid pid: 130954 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:17.527 10:43:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130954 /var/tmp/spdk-raid.sock 00:18:17.527 10:43:44 -- common/autotest_common.sh@819 -- # '[' -z 130954 ']' 00:18:17.527 10:43:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:17.527 10:43:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:17.527 10:43:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:17.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:17.527 10:43:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:17.527 10:43:44 -- common/autotest_common.sh@10 -- # set +x 00:18:17.527 [2024-07-24 10:43:44.178439] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:17.527 [2024-07-24 10:43:44.178711] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.786 [2024-07-24 10:43:44.332012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.786 [2024-07-24 10:43:44.461475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.044 [2024-07-24 10:43:44.539283] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.610 10:43:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:18.610 10:43:45 -- common/autotest_common.sh@852 -- # return 0 00:18:18.610 10:43:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:18.868 [2024-07-24 10:43:45.395289] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:18.868 [2024-07-24 10:43:45.395461] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:18.868 [2024-07-24 10:43:45.395494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.868 [2024-07-24 10:43:45.395516] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.868 [2024-07-24 10:43:45.395525] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:18.868 [2024-07-24 10:43:45.395588] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:18.868 [2024-07-24 10:43:45.395599] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:18.868 [2024-07-24 10:43:45.395629] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:18.868 10:43:45 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.868 10:43:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.126 10:43:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.126 "name": "Existed_Raid", 00:18:19.126 "uuid": "97e9ed93-7178-477d-97b1-7ae20f91a7ff", 00:18:19.126 "strip_size_kb": 64, 00:18:19.126 "state": "configuring", 00:18:19.126 "raid_level": "concat", 00:18:19.126 "superblock": true, 00:18:19.126 "num_base_bdevs": 4, 00:18:19.126 "num_base_bdevs_discovered": 0, 00:18:19.126 "num_base_bdevs_operational": 4, 00:18:19.126 "base_bdevs_list": [ 00:18:19.126 { 00:18:19.126 "name": "BaseBdev1", 00:18:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.126 "is_configured": false, 00:18:19.126 "data_offset": 0, 00:18:19.126 "data_size": 0 00:18:19.126 }, 00:18:19.126 { 00:18:19.126 "name": "BaseBdev2", 00:18:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.126 "is_configured": false, 00:18:19.126 "data_offset": 0, 00:18:19.126 "data_size": 0 00:18:19.126 }, 00:18:19.126 { 00:18:19.126 "name": "BaseBdev3", 00:18:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.126 "is_configured": false, 00:18:19.126 "data_offset": 0, 00:18:19.126 "data_size": 0 00:18:19.126 }, 00:18:19.126 { 00:18:19.126 "name": "BaseBdev4", 00:18:19.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.126 "is_configured": false, 00:18:19.126 "data_offset": 0, 00:18:19.126 "data_size": 0 00:18:19.126 } 00:18:19.126 ] 00:18:19.126 }' 00:18:19.126 10:43:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.126 10:43:45 -- common/autotest_common.sh@10 -- # set +x 00:18:19.692 10:43:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:19.951 [2024-07-24 10:43:46.495361] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:19.951 [2024-07-24 10:43:46.495460] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:19.951 10:43:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.209 [2024-07-24 10:43:46.743529] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.209 [2024-07-24 10:43:46.743655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.209 [2024-07-24 10:43:46.743687] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.209 [2024-07-24 10:43:46.743717] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.209 [2024-07-24 10:43:46.743727] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.209 [2024-07-24 10:43:46.743747] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.209 [2024-07-24 10:43:46.743755] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:20.209 [2024-07-24 10:43:46.743783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:20.209 10:43:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:20.467 [2024-07-24 10:43:46.995841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.467 BaseBdev1 00:18:20.467 10:43:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:20.467 10:43:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:20.467 10:43:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:20.467 10:43:47 -- common/autotest_common.sh@889 -- # local i 00:18:20.467 10:43:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:20.467 10:43:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:20.467 10:43:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.725 10:43:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:20.984 [ 00:18:20.984 { 00:18:20.984 "name": "BaseBdev1", 00:18:20.984 "aliases": [ 00:18:20.984 "67a15556-0f5b-4dd3-b242-fac191e0d2eb" 00:18:20.984 ], 00:18:20.984 "product_name": "Malloc disk", 00:18:20.984 "block_size": 512, 00:18:20.984 "num_blocks": 65536, 00:18:20.984 "uuid": "67a15556-0f5b-4dd3-b242-fac191e0d2eb", 00:18:20.984 "assigned_rate_limits": { 00:18:20.984 "rw_ios_per_sec": 0, 00:18:20.984 "rw_mbytes_per_sec": 0, 00:18:20.984 "r_mbytes_per_sec": 0, 00:18:20.984 "w_mbytes_per_sec": 0 00:18:20.984 }, 00:18:20.984 "claimed": true, 00:18:20.984 "claim_type": "exclusive_write", 00:18:20.984 "zoned": false, 00:18:20.984 "supported_io_types": { 00:18:20.984 "read": true, 00:18:20.984 "write": true, 00:18:20.984 "unmap": true, 00:18:20.984 "write_zeroes": true, 00:18:20.984 "flush": true, 00:18:20.984 "reset": true, 00:18:20.984 "compare": false, 00:18:20.984 "compare_and_write": false, 00:18:20.984 "abort": true, 00:18:20.984 "nvme_admin": false, 00:18:20.984 "nvme_io": false 00:18:20.984 }, 00:18:20.984 "memory_domains": [ 00:18:20.984 { 00:18:20.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.984 "dma_device_type": 2 00:18:20.984 } 00:18:20.984 ], 00:18:20.984 "driver_specific": {} 00:18:20.984 } 00:18:20.984 ] 00:18:20.984 10:43:47 -- common/autotest_common.sh@895 -- # return 0 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.984 10:43:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.243 10:43:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.243 "name": "Existed_Raid", 00:18:21.243 "uuid": "907f00ea-4cca-4afb-999e-bb8165874fc8", 00:18:21.243 "strip_size_kb": 64, 00:18:21.243 "state": "configuring", 00:18:21.243 "raid_level": "concat", 00:18:21.243 "superblock": true, 00:18:21.243 "num_base_bdevs": 4, 00:18:21.243 "num_base_bdevs_discovered": 1, 00:18:21.243 "num_base_bdevs_operational": 4, 00:18:21.243 "base_bdevs_list": [ 00:18:21.243 { 00:18:21.243 "name": "BaseBdev1", 00:18:21.243 "uuid": "67a15556-0f5b-4dd3-b242-fac191e0d2eb", 00:18:21.243 "is_configured": true, 00:18:21.243 "data_offset": 2048, 00:18:21.243 "data_size": 63488 00:18:21.243 }, 00:18:21.243 { 00:18:21.243 "name": "BaseBdev2", 00:18:21.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.243 "is_configured": false, 00:18:21.243 "data_offset": 0, 00:18:21.243 "data_size": 0 00:18:21.243 }, 00:18:21.243 { 00:18:21.243 "name": "BaseBdev3", 00:18:21.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.243 "is_configured": false, 00:18:21.243 "data_offset": 0, 00:18:21.243 "data_size": 0 00:18:21.243 }, 00:18:21.243 { 00:18:21.243 "name": "BaseBdev4", 00:18:21.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.243 "is_configured": false, 00:18:21.243 "data_offset": 0, 00:18:21.243 "data_size": 0 00:18:21.243 } 00:18:21.243 ] 00:18:21.243 }' 00:18:21.243 10:43:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.243 10:43:47 -- common/autotest_common.sh@10 -- # set +x 00:18:21.810 10:43:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:22.069 [2024-07-24 10:43:48.608448] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:22.069 [2024-07-24 10:43:48.608547] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:22.069 10:43:48 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:22.069 10:43:48 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:22.327 10:43:48 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:22.586 BaseBdev1 00:18:22.586 10:43:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:22.586 10:43:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:22.586 10:43:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:22.586 10:43:49 -- common/autotest_common.sh@889 -- # local i 00:18:22.586 10:43:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:22.586 10:43:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:22.586 10:43:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.845 10:43:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:23.103 [ 00:18:23.103 { 00:18:23.103 "name": "BaseBdev1", 00:18:23.103 "aliases": [ 00:18:23.103 "a8b6c653-9c70-49d4-82f4-1d7e0f5ceaf2" 00:18:23.103 ], 
00:18:23.103 "product_name": "Malloc disk", 00:18:23.103 "block_size": 512, 00:18:23.103 "num_blocks": 65536, 00:18:23.103 "uuid": "a8b6c653-9c70-49d4-82f4-1d7e0f5ceaf2", 00:18:23.103 "assigned_rate_limits": { 00:18:23.103 "rw_ios_per_sec": 0, 00:18:23.103 "rw_mbytes_per_sec": 0, 00:18:23.103 "r_mbytes_per_sec": 0, 00:18:23.103 "w_mbytes_per_sec": 0 00:18:23.103 }, 00:18:23.103 "claimed": false, 00:18:23.103 "zoned": false, 00:18:23.103 "supported_io_types": { 00:18:23.103 "read": true, 00:18:23.103 "write": true, 00:18:23.103 "unmap": true, 00:18:23.103 "write_zeroes": true, 00:18:23.103 "flush": true, 00:18:23.103 "reset": true, 00:18:23.103 "compare": false, 00:18:23.103 "compare_and_write": false, 00:18:23.103 "abort": true, 00:18:23.103 "nvme_admin": false, 00:18:23.103 "nvme_io": false 00:18:23.103 }, 00:18:23.103 "memory_domains": [ 00:18:23.103 { 00:18:23.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.103 "dma_device_type": 2 00:18:23.103 } 00:18:23.103 ], 00:18:23.103 "driver_specific": {} 00:18:23.103 } 00:18:23.103 ] 00:18:23.103 10:43:49 -- common/autotest_common.sh@895 -- # return 0 00:18:23.103 10:43:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:23.362 [2024-07-24 10:43:49.967509] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.362 [2024-07-24 10:43:49.970142] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.362 [2024-07-24 10:43:49.970235] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.362 [2024-07-24 10:43:49.970260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.362 [2024-07-24 10:43:49.970296] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.362 [2024-07-24 10:43:49.970306] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:23.362 [2024-07-24 10:43:49.970325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.362 10:43:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.620 10:43:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.620 "name": "Existed_Raid", 
00:18:23.620 "uuid": "e20171f6-fdc0-4580-8fff-0f865f8836f1", 00:18:23.620 "strip_size_kb": 64, 00:18:23.620 "state": "configuring", 00:18:23.620 "raid_level": "concat", 00:18:23.620 "superblock": true, 00:18:23.620 "num_base_bdevs": 4, 00:18:23.620 "num_base_bdevs_discovered": 1, 00:18:23.620 "num_base_bdevs_operational": 4, 00:18:23.620 "base_bdevs_list": [ 00:18:23.620 { 00:18:23.620 "name": "BaseBdev1", 00:18:23.620 "uuid": "a8b6c653-9c70-49d4-82f4-1d7e0f5ceaf2", 00:18:23.620 "is_configured": true, 00:18:23.620 "data_offset": 2048, 00:18:23.620 "data_size": 63488 00:18:23.620 }, 00:18:23.620 { 00:18:23.620 "name": "BaseBdev2", 00:18:23.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.620 "is_configured": false, 00:18:23.620 "data_offset": 0, 00:18:23.620 "data_size": 0 00:18:23.620 }, 00:18:23.620 { 00:18:23.620 "name": "BaseBdev3", 00:18:23.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.620 "is_configured": false, 00:18:23.620 "data_offset": 0, 00:18:23.620 "data_size": 0 00:18:23.620 }, 00:18:23.620 { 00:18:23.620 "name": "BaseBdev4", 00:18:23.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.620 "is_configured": false, 00:18:23.620 "data_offset": 0, 00:18:23.620 "data_size": 0 00:18:23.620 } 00:18:23.620 ] 00:18:23.620 }' 00:18:23.620 10:43:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.620 10:43:50 -- common/autotest_common.sh@10 -- # set +x 00:18:24.186 10:43:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:24.444 [2024-07-24 10:43:51.132118] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.703 BaseBdev2 00:18:24.703 10:43:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:24.703 10:43:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:24.703 10:43:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:24.703 10:43:51 -- common/autotest_common.sh@889 -- # local i 00:18:24.703 10:43:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:24.703 10:43:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:24.703 10:43:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.961 10:43:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:25.219 [ 00:18:25.219 { 00:18:25.219 "name": "BaseBdev2", 00:18:25.219 "aliases": [ 00:18:25.219 "12a0a871-d5f4-4ddf-bb39-c0f93a301aa4" 00:18:25.219 ], 00:18:25.219 "product_name": "Malloc disk", 00:18:25.219 "block_size": 512, 00:18:25.219 "num_blocks": 65536, 00:18:25.219 "uuid": "12a0a871-d5f4-4ddf-bb39-c0f93a301aa4", 00:18:25.219 "assigned_rate_limits": { 00:18:25.219 "rw_ios_per_sec": 0, 00:18:25.219 "rw_mbytes_per_sec": 0, 00:18:25.219 "r_mbytes_per_sec": 0, 00:18:25.219 "w_mbytes_per_sec": 0 00:18:25.219 }, 00:18:25.219 "claimed": true, 00:18:25.219 "claim_type": "exclusive_write", 00:18:25.219 "zoned": false, 00:18:25.219 "supported_io_types": { 00:18:25.219 "read": true, 00:18:25.219 "write": true, 00:18:25.219 "unmap": true, 00:18:25.219 "write_zeroes": true, 00:18:25.219 "flush": true, 00:18:25.219 "reset": true, 00:18:25.219 "compare": false, 00:18:25.219 "compare_and_write": false, 00:18:25.219 "abort": true, 00:18:25.219 "nvme_admin": false, 00:18:25.219 "nvme_io": false 00:18:25.219 }, 00:18:25.219 
"memory_domains": [ 00:18:25.219 { 00:18:25.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.219 "dma_device_type": 2 00:18:25.219 } 00:18:25.219 ], 00:18:25.219 "driver_specific": {} 00:18:25.219 } 00:18:25.219 ] 00:18:25.219 10:43:51 -- common/autotest_common.sh@895 -- # return 0 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.219 10:43:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.477 10:43:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.477 "name": "Existed_Raid", 00:18:25.477 "uuid": "e20171f6-fdc0-4580-8fff-0f865f8836f1", 00:18:25.477 "strip_size_kb": 64, 00:18:25.477 "state": "configuring", 00:18:25.477 "raid_level": "concat", 00:18:25.477 "superblock": true, 00:18:25.477 "num_base_bdevs": 4, 00:18:25.477 "num_base_bdevs_discovered": 2, 00:18:25.477 "num_base_bdevs_operational": 4, 00:18:25.477 "base_bdevs_list": [ 00:18:25.477 { 00:18:25.477 "name": "BaseBdev1", 00:18:25.477 "uuid": "a8b6c653-9c70-49d4-82f4-1d7e0f5ceaf2", 00:18:25.477 "is_configured": true, 00:18:25.477 "data_offset": 2048, 00:18:25.477 "data_size": 63488 00:18:25.477 }, 00:18:25.477 { 00:18:25.477 "name": "BaseBdev2", 00:18:25.477 "uuid": "12a0a871-d5f4-4ddf-bb39-c0f93a301aa4", 00:18:25.477 "is_configured": true, 00:18:25.477 "data_offset": 2048, 00:18:25.477 "data_size": 63488 00:18:25.477 }, 00:18:25.477 { 00:18:25.477 "name": "BaseBdev3", 00:18:25.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.477 "is_configured": false, 00:18:25.478 "data_offset": 0, 00:18:25.478 "data_size": 0 00:18:25.478 }, 00:18:25.478 { 00:18:25.478 "name": "BaseBdev4", 00:18:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.478 "is_configured": false, 00:18:25.478 "data_offset": 0, 00:18:25.478 "data_size": 0 00:18:25.478 } 00:18:25.478 ] 00:18:25.478 }' 00:18:25.478 10:43:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.478 10:43:51 -- common/autotest_common.sh@10 -- # set +x 00:18:26.044 10:43:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:26.302 [2024-07-24 10:43:52.842016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:26.302 BaseBdev3 00:18:26.302 10:43:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:26.302 10:43:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:26.302 10:43:52 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:18:26.302 10:43:52 -- common/autotest_common.sh@889 -- # local i 00:18:26.302 10:43:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:26.302 10:43:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:26.302 10:43:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.560 10:43:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:26.818 [ 00:18:26.818 { 00:18:26.818 "name": "BaseBdev3", 00:18:26.818 "aliases": [ 00:18:26.818 "b8cf6392-6547-4a65-afb3-26c78d24845b" 00:18:26.818 ], 00:18:26.818 "product_name": "Malloc disk", 00:18:26.818 "block_size": 512, 00:18:26.818 "num_blocks": 65536, 00:18:26.818 "uuid": "b8cf6392-6547-4a65-afb3-26c78d24845b", 00:18:26.818 "assigned_rate_limits": { 00:18:26.818 "rw_ios_per_sec": 0, 00:18:26.818 "rw_mbytes_per_sec": 0, 00:18:26.818 "r_mbytes_per_sec": 0, 00:18:26.818 "w_mbytes_per_sec": 0 00:18:26.818 }, 00:18:26.818 "claimed": true, 00:18:26.818 "claim_type": "exclusive_write", 00:18:26.818 "zoned": false, 00:18:26.818 "supported_io_types": { 00:18:26.818 "read": true, 00:18:26.818 "write": true, 00:18:26.818 "unmap": true, 00:18:26.818 "write_zeroes": true, 00:18:26.818 "flush": true, 00:18:26.818 "reset": true, 00:18:26.818 "compare": false, 00:18:26.818 "compare_and_write": false, 00:18:26.818 "abort": true, 00:18:26.818 "nvme_admin": false, 00:18:26.818 "nvme_io": false 00:18:26.818 }, 00:18:26.818 "memory_domains": [ 00:18:26.818 { 00:18:26.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.818 "dma_device_type": 2 00:18:26.818 } 00:18:26.818 ], 00:18:26.818 "driver_specific": {} 00:18:26.818 } 00:18:26.818 ] 00:18:26.818 10:43:53 -- common/autotest_common.sh@895 -- # return 0 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.818 10:43:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.076 10:43:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.076 "name": "Existed_Raid", 00:18:27.076 "uuid": "e20171f6-fdc0-4580-8fff-0f865f8836f1", 00:18:27.076 "strip_size_kb": 64, 00:18:27.076 "state": "configuring", 00:18:27.076 "raid_level": "concat", 00:18:27.076 "superblock": true, 00:18:27.076 "num_base_bdevs": 4, 00:18:27.076 "num_base_bdevs_discovered": 3, 00:18:27.076 "num_base_bdevs_operational": 4, 00:18:27.076 "base_bdevs_list": [ 00:18:27.076 { 
00:18:27.076 "name": "BaseBdev1", 00:18:27.076 "uuid": "a8b6c653-9c70-49d4-82f4-1d7e0f5ceaf2", 00:18:27.076 "is_configured": true, 00:18:27.076 "data_offset": 2048, 00:18:27.076 "data_size": 63488 00:18:27.076 }, 00:18:27.076 { 00:18:27.076 "name": "BaseBdev2", 00:18:27.076 "uuid": "12a0a871-d5f4-4ddf-bb39-c0f93a301aa4", 00:18:27.076 "is_configured": true, 00:18:27.076 "data_offset": 2048, 00:18:27.076 "data_size": 63488 00:18:27.076 }, 00:18:27.076 { 00:18:27.076 "name": "BaseBdev3", 00:18:27.076 "uuid": "b8cf6392-6547-4a65-afb3-26c78d24845b", 00:18:27.076 "is_configured": true, 00:18:27.076 "data_offset": 2048, 00:18:27.076 "data_size": 63488 00:18:27.076 }, 00:18:27.076 { 00:18:27.076 "name": "BaseBdev4", 00:18:27.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.076 "is_configured": false, 00:18:27.076 "data_offset": 0, 00:18:27.076 "data_size": 0 00:18:27.076 } 00:18:27.076 ] 00:18:27.076 }' 00:18:27.076 10:43:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.076 10:43:53 -- common/autotest_common.sh@10 -- # set +x 00:18:27.643 10:43:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:27.901 [2024-07-24 10:43:54.551747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:27.901 [2024-07-24 10:43:54.552042] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:18:27.901 [2024-07-24 10:43:54.552060] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:27.901 [2024-07-24 10:43:54.552207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:18:27.901 [2024-07-24 10:43:54.552639] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:18:27.901 [2024-07-24 10:43:54.552664] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:18:27.901 [2024-07-24 10:43:54.552843] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.901 BaseBdev4 00:18:27.901 10:43:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:27.901 10:43:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:27.901 10:43:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:27.901 10:43:54 -- common/autotest_common.sh@889 -- # local i 00:18:27.901 10:43:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:27.901 10:43:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:27.901 10:43:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.175 10:43:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:28.467 [ 00:18:28.467 { 00:18:28.467 "name": "BaseBdev4", 00:18:28.467 "aliases": [ 00:18:28.467 "09d76f6e-8b72-49cb-b4b3-ab303c71316b" 00:18:28.467 ], 00:18:28.467 "product_name": "Malloc disk", 00:18:28.467 "block_size": 512, 00:18:28.467 "num_blocks": 65536, 00:18:28.467 "uuid": "09d76f6e-8b72-49cb-b4b3-ab303c71316b", 00:18:28.467 "assigned_rate_limits": { 00:18:28.467 "rw_ios_per_sec": 0, 00:18:28.467 "rw_mbytes_per_sec": 0, 00:18:28.467 "r_mbytes_per_sec": 0, 00:18:28.467 "w_mbytes_per_sec": 0 00:18:28.467 }, 00:18:28.467 "claimed": true, 00:18:28.467 "claim_type": "exclusive_write", 00:18:28.467 "zoned": false, 
00:18:28.467 "supported_io_types": { 00:18:28.467 "read": true, 00:18:28.467 "write": true, 00:18:28.467 "unmap": true, 00:18:28.467 "write_zeroes": true, 00:18:28.467 "flush": true, 00:18:28.467 "reset": true, 00:18:28.467 "compare": false, 00:18:28.467 "compare_and_write": false, 00:18:28.467 "abort": true, 00:18:28.467 "nvme_admin": false, 00:18:28.467 "nvme_io": false 00:18:28.467 }, 00:18:28.467 "memory_domains": [ 00:18:28.467 { 00:18:28.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.467 "dma_device_type": 2 00:18:28.467 } 00:18:28.467 ], 00:18:28.467 "driver_specific": {} 00:18:28.467 } 00:18:28.467 ] 00:18:28.467 10:43:55 -- common/autotest_common.sh@895 -- # return 0 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.467 10:43:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.725 10:43:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.725 "name": "Existed_Raid", 00:18:28.725 "uuid": "e20171f6-fdc0-4580-8fff-0f865f8836f1", 00:18:28.725 "strip_size_kb": 64, 00:18:28.725 "state": "online", 00:18:28.725 "raid_level": "concat", 00:18:28.725 "superblock": true, 00:18:28.725 "num_base_bdevs": 4, 00:18:28.725 "num_base_bdevs_discovered": 4, 00:18:28.725 "num_base_bdevs_operational": 4, 00:18:28.725 "base_bdevs_list": [ 00:18:28.725 { 00:18:28.725 "name": "BaseBdev1", 00:18:28.725 "uuid": "a8b6c653-9c70-49d4-82f4-1d7e0f5ceaf2", 00:18:28.725 "is_configured": true, 00:18:28.725 "data_offset": 2048, 00:18:28.725 "data_size": 63488 00:18:28.725 }, 00:18:28.725 { 00:18:28.725 "name": "BaseBdev2", 00:18:28.725 "uuid": "12a0a871-d5f4-4ddf-bb39-c0f93a301aa4", 00:18:28.725 "is_configured": true, 00:18:28.725 "data_offset": 2048, 00:18:28.725 "data_size": 63488 00:18:28.725 }, 00:18:28.725 { 00:18:28.725 "name": "BaseBdev3", 00:18:28.725 "uuid": "b8cf6392-6547-4a65-afb3-26c78d24845b", 00:18:28.725 "is_configured": true, 00:18:28.725 "data_offset": 2048, 00:18:28.725 "data_size": 63488 00:18:28.725 }, 00:18:28.725 { 00:18:28.725 "name": "BaseBdev4", 00:18:28.725 "uuid": "09d76f6e-8b72-49cb-b4b3-ab303c71316b", 00:18:28.725 "is_configured": true, 00:18:28.725 "data_offset": 2048, 00:18:28.725 "data_size": 63488 00:18:28.725 } 00:18:28.725 ] 00:18:28.725 }' 00:18:28.725 10:43:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.725 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:18:29.290 10:43:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:18:29.546 [2024-07-24 10:43:56.232381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:29.546 [2024-07-24 10:43:56.232438] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.546 [2024-07-24 10:43:56.232527] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.803 10:43:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.061 10:43:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.061 "name": "Existed_Raid", 00:18:30.061 "uuid": "e20171f6-fdc0-4580-8fff-0f865f8836f1", 00:18:30.061 "strip_size_kb": 64, 00:18:30.061 "state": "offline", 00:18:30.061 "raid_level": "concat", 00:18:30.061 "superblock": true, 00:18:30.061 "num_base_bdevs": 4, 00:18:30.061 "num_base_bdevs_discovered": 3, 00:18:30.061 "num_base_bdevs_operational": 3, 00:18:30.061 "base_bdevs_list": [ 00:18:30.061 { 00:18:30.061 "name": null, 00:18:30.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.061 "is_configured": false, 00:18:30.061 "data_offset": 2048, 00:18:30.061 "data_size": 63488 00:18:30.061 }, 00:18:30.061 { 00:18:30.061 "name": "BaseBdev2", 00:18:30.061 "uuid": "12a0a871-d5f4-4ddf-bb39-c0f93a301aa4", 00:18:30.061 "is_configured": true, 00:18:30.061 "data_offset": 2048, 00:18:30.061 "data_size": 63488 00:18:30.061 }, 00:18:30.061 { 00:18:30.061 "name": "BaseBdev3", 00:18:30.061 "uuid": "b8cf6392-6547-4a65-afb3-26c78d24845b", 00:18:30.061 "is_configured": true, 00:18:30.061 "data_offset": 2048, 00:18:30.061 "data_size": 63488 00:18:30.061 }, 00:18:30.061 { 00:18:30.061 "name": "BaseBdev4", 00:18:30.061 "uuid": "09d76f6e-8b72-49cb-b4b3-ab303c71316b", 00:18:30.061 "is_configured": true, 00:18:30.061 "data_offset": 2048, 00:18:30.061 "data_size": 63488 00:18:30.061 } 00:18:30.061 ] 00:18:30.061 }' 00:18:30.061 10:43:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.061 10:43:56 -- common/autotest_common.sh@10 -- # set +x 00:18:30.628 10:43:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:30.628 10:43:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:30.628 10:43:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.628 10:43:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:30.887 10:43:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:30.887 10:43:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:30.887 10:43:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:31.145 [2024-07-24 10:43:57.717697] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.145 10:43:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:31.145 10:43:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.145 10:43:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.145 10:43:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:31.404 10:43:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:31.404 10:43:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.404 10:43:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:31.663 [2024-07-24 10:43:58.256617] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:31.663 10:43:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:31.663 10:43:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.663 10:43:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.663 10:43:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:31.922 10:43:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:31.922 10:43:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.922 10:43:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:32.183 [2024-07-24 10:43:58.791099] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:32.183 [2024-07-24 10:43:58.791230] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:18:32.183 10:43:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.183 10:43:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.183 10:43:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.183 10:43:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:32.443 10:43:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:32.443 10:43:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:32.443 10:43:59 -- bdev/bdev_raid.sh@287 -- # killprocess 130954 00:18:32.443 10:43:59 -- common/autotest_common.sh@926 -- # '[' -z 130954 ']' 00:18:32.443 10:43:59 -- common/autotest_common.sh@930 -- # kill -0 130954 00:18:32.443 10:43:59 -- common/autotest_common.sh@931 -- # uname 00:18:32.443 10:43:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:32.443 10:43:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130954 00:18:32.443 10:43:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:32.443 10:43:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:32.443 10:43:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130954' 00:18:32.443 killing process with pid 130954 
00:18:32.443 10:43:59 -- common/autotest_common.sh@945 -- # kill 130954 00:18:32.443 [2024-07-24 10:43:59.130825] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:32.443 [2024-07-24 10:43:59.130922] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:32.443 10:43:59 -- common/autotest_common.sh@950 -- # wait 130954 00:18:32.702 10:43:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:32.702 00:18:32.702 real 0m15.270s 00:18:32.702 user 0m28.015s 00:18:32.702 sys 0m2.142s 00:18:32.702 10:43:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.702 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:32.702 ************************************ 00:18:32.702 END TEST raid_state_function_test_sb 00:18:32.702 ************************************ 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:32.971 10:43:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:32.971 10:43:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:32.971 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:32.971 ************************************ 00:18:32.971 START TEST raid_superblock_test 00:18:32.971 ************************************ 00:18:32.971 10:43:59 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=131410 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131410 /var/tmp/spdk-raid.sock 00:18:32.971 10:43:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:32.971 10:43:59 -- common/autotest_common.sh@819 -- # '[' -z 131410 ']' 00:18:32.971 10:43:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:32.971 10:43:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.971 10:43:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:32.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
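Unlike the state-function tests above, raid_superblock_test builds each raid member as a passthru (pt) bdev with a fixed UUID layered over a malloc bdev (pt1..pt4 over malloc1..malloc4 in the trace below). Per member, the construction is equivalent to the following sketch, using the same commands and UUID pattern shown in this trace:

  # 32 MiB malloc bdev with 512-byte blocks (65536 blocks), then a passthru on top with a fixed UUID
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
      -u 00000000-0000-0000-0000-000000000001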
00:18:32.971 10:43:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.971 10:43:59 -- common/autotest_common.sh@10 -- # set +x 00:18:32.971 [2024-07-24 10:43:59.496988] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:32.971 [2024-07-24 10:43:59.497253] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131410 ] 00:18:32.971 [2024-07-24 10:43:59.648362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.243 [2024-07-24 10:43:59.773299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.243 [2024-07-24 10:43:59.853499] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:33.809 10:44:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.809 10:44:00 -- common/autotest_common.sh@852 -- # return 0 00:18:33.809 10:44:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:33.809 10:44:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:33.809 10:44:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:34.067 10:44:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:34.067 10:44:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:34.067 10:44:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.067 10:44:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.067 10:44:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.067 10:44:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:34.325 malloc1 00:18:34.325 10:44:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.583 [2024-07-24 10:44:01.066318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.583 [2024-07-24 10:44:01.066508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.583 [2024-07-24 10:44:01.066567] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:18:34.583 [2024-07-24 10:44:01.066644] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.583 [2024-07-24 10:44:01.069808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.583 [2024-07-24 10:44:01.069867] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.583 pt1 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:34.583 10:44:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:34.840 malloc2 00:18:34.840 10:44:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.097 [2024-07-24 10:44:01.576901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.097 [2024-07-24 10:44:01.577029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.097 [2024-07-24 10:44:01.577076] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:18:35.097 [2024-07-24 10:44:01.577127] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.097 [2024-07-24 10:44:01.579866] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.097 [2024-07-24 10:44:01.579944] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.097 pt2 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.097 10:44:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:35.354 malloc3 00:18:35.354 10:44:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:35.610 [2024-07-24 10:44:02.082155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:35.610 [2024-07-24 10:44:02.082289] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.610 [2024-07-24 10:44:02.082361] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:35.610 [2024-07-24 10:44:02.082414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.611 [2024-07-24 10:44:02.085322] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.611 [2024-07-24 10:44:02.085382] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:35.611 pt3 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.611 10:44:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:35.867 malloc4 00:18:35.867 10:44:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:35.867 [2024-07-24 10:44:02.553385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:35.867 [2024-07-24 10:44:02.553582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.867 [2024-07-24 10:44:02.553641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:35.867 [2024-07-24 10:44:02.553710] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.123 [2024-07-24 10:44:02.556749] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.123 [2024-07-24 10:44:02.556829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:36.123 pt4 00:18:36.123 10:44:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.123 10:44:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.123 10:44:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:36.379 [2024-07-24 10:44:02.881856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:36.379 [2024-07-24 10:44:02.884636] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.379 [2024-07-24 10:44:02.884739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:36.379 [2024-07-24 10:44:02.884810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:36.379 [2024-07-24 10:44:02.885158] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:18:36.379 [2024-07-24 10:44:02.885184] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:36.379 [2024-07-24 10:44:02.885393] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:18:36.379 [2024-07-24 10:44:02.885880] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:18:36.379 [2024-07-24 10:44:02.885909] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:18:36.379 [2024-07-24 10:44:02.886200] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:36.379 10:44:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.637 10:44:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.637 "name": "raid_bdev1", 00:18:36.637 "uuid": "3f90814b-06ee-4f66-9490-59627e553822", 00:18:36.637 "strip_size_kb": 64, 00:18:36.637 "state": "online", 00:18:36.637 "raid_level": "concat", 00:18:36.637 "superblock": true, 00:18:36.637 "num_base_bdevs": 4, 00:18:36.637 "num_base_bdevs_discovered": 4, 00:18:36.637 "num_base_bdevs_operational": 4, 00:18:36.637 "base_bdevs_list": [ 00:18:36.637 { 00:18:36.637 "name": "pt1", 00:18:36.637 "uuid": "3c6c90c1-0705-510c-92d7-70b5fc4c2c34", 00:18:36.637 "is_configured": true, 00:18:36.637 "data_offset": 2048, 00:18:36.637 "data_size": 63488 00:18:36.637 }, 00:18:36.637 { 00:18:36.637 "name": "pt2", 00:18:36.637 "uuid": "d06528d4-cd87-563a-963f-29d74b5dad0d", 00:18:36.637 "is_configured": true, 00:18:36.637 "data_offset": 2048, 00:18:36.637 "data_size": 63488 00:18:36.637 }, 00:18:36.637 { 00:18:36.637 "name": "pt3", 00:18:36.637 "uuid": "869a77df-1728-5f27-b3c6-577195d234bc", 00:18:36.637 "is_configured": true, 00:18:36.637 "data_offset": 2048, 00:18:36.637 "data_size": 63488 00:18:36.637 }, 00:18:36.637 { 00:18:36.637 "name": "pt4", 00:18:36.637 "uuid": "c0779430-1c97-506a-9225-87ecd14acaaf", 00:18:36.637 "is_configured": true, 00:18:36.637 "data_offset": 2048, 00:18:36.637 "data_size": 63488 00:18:36.637 } 00:18:36.637 ] 00:18:36.637 }' 00:18:36.637 10:44:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.637 10:44:03 -- common/autotest_common.sh@10 -- # set +x 00:18:37.581 10:44:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:37.581 10:44:03 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:37.581 [2024-07-24 10:44:04.194773] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.581 10:44:04 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=3f90814b-06ee-4f66-9490-59627e553822 00:18:37.581 10:44:04 -- bdev/bdev_raid.sh@380 -- # '[' -z 3f90814b-06ee-4f66-9490-59627e553822 ']' 00:18:37.581 10:44:04 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:37.839 [2024-07-24 10:44:04.474404] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.839 [2024-07-24 10:44:04.474475] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.839 [2024-07-24 10:44:04.474621] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.839 [2024-07-24 10:44:04.474735] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.839 [2024-07-24 10:44:04.474751] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:18:37.839 10:44:04 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.839 10:44:04 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:38.097 10:44:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:38.097 10:44:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:38.097 10:44:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.097 10:44:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:18:38.355 10:44:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.355 10:44:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:38.613 10:44:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.613 10:44:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:38.870 10:44:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.870 10:44:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:39.128 10:44:05 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:39.128 10:44:05 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:39.386 10:44:05 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:39.386 10:44:05 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.386 10:44:05 -- common/autotest_common.sh@640 -- # local es=0 00:18:39.386 10:44:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.386 10:44:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.386 10:44:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.386 10:44:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.386 10:44:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.386 10:44:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.386 10:44:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:39.386 10:44:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:39.386 10:44:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:39.386 10:44:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:39.644 [2024-07-24 10:44:06.206728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:39.644 [2024-07-24 10:44:06.209154] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:39.644 [2024-07-24 10:44:06.209215] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:39.644 [2024-07-24 10:44:06.209255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:39.644 [2024-07-24 10:44:06.209320] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:39.644 [2024-07-24 10:44:06.209451] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:39.644 [2024-07-24 10:44:06.209494] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:39.644 
[2024-07-24 10:44:06.209570] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:39.644 [2024-07-24 10:44:06.209621] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.644 [2024-07-24 10:44:06.209635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:18:39.644 request: 00:18:39.644 { 00:18:39.644 "name": "raid_bdev1", 00:18:39.644 "raid_level": "concat", 00:18:39.644 "base_bdevs": [ 00:18:39.644 "malloc1", 00:18:39.644 "malloc2", 00:18:39.644 "malloc3", 00:18:39.644 "malloc4" 00:18:39.644 ], 00:18:39.644 "superblock": false, 00:18:39.644 "strip_size_kb": 64, 00:18:39.644 "method": "bdev_raid_create", 00:18:39.644 "req_id": 1 00:18:39.644 } 00:18:39.644 Got JSON-RPC error response 00:18:39.644 response: 00:18:39.644 { 00:18:39.644 "code": -17, 00:18:39.644 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:39.644 } 00:18:39.644 10:44:06 -- common/autotest_common.sh@643 -- # es=1 00:18:39.644 10:44:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:39.644 10:44:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:39.644 10:44:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:39.644 10:44:06 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:39.644 10:44:06 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.902 10:44:06 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:39.902 10:44:06 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:39.902 10:44:06 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.160 [2024-07-24 10:44:06.670735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.160 [2024-07-24 10:44:06.670894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.160 [2024-07-24 10:44:06.670942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:40.160 [2024-07-24 10:44:06.670975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.160 [2024-07-24 10:44:06.673610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.160 [2024-07-24 10:44:06.673689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.160 [2024-07-24 10:44:06.673797] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:40.160 [2024-07-24 10:44:06.673881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.160 pt1 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.160 10:44:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.417 10:44:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.417 "name": "raid_bdev1", 00:18:40.417 "uuid": "3f90814b-06ee-4f66-9490-59627e553822", 00:18:40.417 "strip_size_kb": 64, 00:18:40.417 "state": "configuring", 00:18:40.417 "raid_level": "concat", 00:18:40.417 "superblock": true, 00:18:40.417 "num_base_bdevs": 4, 00:18:40.417 "num_base_bdevs_discovered": 1, 00:18:40.417 "num_base_bdevs_operational": 4, 00:18:40.417 "base_bdevs_list": [ 00:18:40.417 { 00:18:40.417 "name": "pt1", 00:18:40.417 "uuid": "3c6c90c1-0705-510c-92d7-70b5fc4c2c34", 00:18:40.417 "is_configured": true, 00:18:40.417 "data_offset": 2048, 00:18:40.417 "data_size": 63488 00:18:40.417 }, 00:18:40.417 { 00:18:40.417 "name": null, 00:18:40.417 "uuid": "d06528d4-cd87-563a-963f-29d74b5dad0d", 00:18:40.417 "is_configured": false, 00:18:40.417 "data_offset": 2048, 00:18:40.417 "data_size": 63488 00:18:40.417 }, 00:18:40.417 { 00:18:40.417 "name": null, 00:18:40.417 "uuid": "869a77df-1728-5f27-b3c6-577195d234bc", 00:18:40.417 "is_configured": false, 00:18:40.417 "data_offset": 2048, 00:18:40.417 "data_size": 63488 00:18:40.417 }, 00:18:40.417 { 00:18:40.417 "name": null, 00:18:40.417 "uuid": "c0779430-1c97-506a-9225-87ecd14acaaf", 00:18:40.417 "is_configured": false, 00:18:40.417 "data_offset": 2048, 00:18:40.417 "data_size": 63488 00:18:40.417 } 00:18:40.417 ] 00:18:40.417 }' 00:18:40.417 10:44:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.417 10:44:06 -- common/autotest_common.sh@10 -- # set +x 00:18:40.983 10:44:07 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:40.983 10:44:07 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.241 [2024-07-24 10:44:07.806925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.241 [2024-07-24 10:44:07.807083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.241 [2024-07-24 10:44:07.807136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:41.241 [2024-07-24 10:44:07.807161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.241 [2024-07-24 10:44:07.807709] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.241 [2024-07-24 10:44:07.807772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.241 [2024-07-24 10:44:07.807877] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:41.241 [2024-07-24 10:44:07.807905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.241 pt2 00:18:41.241 10:44:07 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:41.499 [2024-07-24 10:44:08.035097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.499 10:44:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.757 10:44:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.757 "name": "raid_bdev1", 00:18:41.757 "uuid": "3f90814b-06ee-4f66-9490-59627e553822", 00:18:41.757 "strip_size_kb": 64, 00:18:41.758 "state": "configuring", 00:18:41.758 "raid_level": "concat", 00:18:41.758 "superblock": true, 00:18:41.758 "num_base_bdevs": 4, 00:18:41.758 "num_base_bdevs_discovered": 1, 00:18:41.758 "num_base_bdevs_operational": 4, 00:18:41.758 "base_bdevs_list": [ 00:18:41.758 { 00:18:41.758 "name": "pt1", 00:18:41.758 "uuid": "3c6c90c1-0705-510c-92d7-70b5fc4c2c34", 00:18:41.758 "is_configured": true, 00:18:41.758 "data_offset": 2048, 00:18:41.758 "data_size": 63488 00:18:41.758 }, 00:18:41.758 { 00:18:41.758 "name": null, 00:18:41.758 "uuid": "d06528d4-cd87-563a-963f-29d74b5dad0d", 00:18:41.758 "is_configured": false, 00:18:41.758 "data_offset": 2048, 00:18:41.758 "data_size": 63488 00:18:41.758 }, 00:18:41.758 { 00:18:41.758 "name": null, 00:18:41.758 "uuid": "869a77df-1728-5f27-b3c6-577195d234bc", 00:18:41.758 "is_configured": false, 00:18:41.758 "data_offset": 2048, 00:18:41.758 "data_size": 63488 00:18:41.758 }, 00:18:41.758 { 00:18:41.758 "name": null, 00:18:41.758 "uuid": "c0779430-1c97-506a-9225-87ecd14acaaf", 00:18:41.758 "is_configured": false, 00:18:41.758 "data_offset": 2048, 00:18:41.758 "data_size": 63488 00:18:41.758 } 00:18:41.758 ] 00:18:41.758 }' 00:18:41.758 10:44:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.758 10:44:08 -- common/autotest_common.sh@10 -- # set +x 00:18:42.325 10:44:08 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:42.325 10:44:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.325 10:44:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.582 [2024-07-24 10:44:09.195335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.582 [2024-07-24 10:44:09.195502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.582 [2024-07-24 10:44:09.195575] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:42.582 [2024-07-24 10:44:09.195608] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.582 [2024-07-24 10:44:09.196261] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.583 [2024-07-24 10:44:09.196318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.583 [2024-07-24 10:44:09.196423] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:42.583 [2024-07-24 10:44:09.196451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.583 pt2 00:18:42.583 10:44:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:42.583 10:44:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.583 10:44:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:42.841 [2024-07-24 10:44:09.419392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:42.841 [2024-07-24 10:44:09.419579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.841 [2024-07-24 10:44:09.419642] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:42.841 [2024-07-24 10:44:09.419684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.841 [2024-07-24 10:44:09.420209] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.841 [2024-07-24 10:44:09.420263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:42.841 [2024-07-24 10:44:09.420359] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:42.841 [2024-07-24 10:44:09.420386] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:42.841 pt3 00:18:42.841 10:44:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:42.841 10:44:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.841 10:44:09 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:43.100 [2024-07-24 10:44:09.651419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:43.100 [2024-07-24 10:44:09.651601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.100 [2024-07-24 10:44:09.651652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:43.100 [2024-07-24 10:44:09.651685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.100 [2024-07-24 10:44:09.652198] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.100 [2024-07-24 10:44:09.652273] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:43.100 [2024-07-24 10:44:09.652370] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:43.100 [2024-07-24 10:44:09.652398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:43.100 [2024-07-24 10:44:09.652567] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:43.100 [2024-07-24 10:44:09.652582] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:43.100 [2024-07-24 10:44:09.652683] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:18:43.100 [2024-07-24 10:44:09.653038] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:43.100 [2024-07-24 10:44:09.653052] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:43.100 [2024-07-24 10:44:09.653160] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:43.100 pt4 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.100 10:44:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.359 10:44:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.359 "name": "raid_bdev1", 00:18:43.359 "uuid": "3f90814b-06ee-4f66-9490-59627e553822", 00:18:43.359 "strip_size_kb": 64, 00:18:43.359 "state": "online", 00:18:43.359 "raid_level": "concat", 00:18:43.359 "superblock": true, 00:18:43.359 "num_base_bdevs": 4, 00:18:43.359 "num_base_bdevs_discovered": 4, 00:18:43.359 "num_base_bdevs_operational": 4, 00:18:43.359 "base_bdevs_list": [ 00:18:43.359 { 00:18:43.359 "name": "pt1", 00:18:43.359 "uuid": "3c6c90c1-0705-510c-92d7-70b5fc4c2c34", 00:18:43.359 "is_configured": true, 00:18:43.359 "data_offset": 2048, 00:18:43.359 "data_size": 63488 00:18:43.359 }, 00:18:43.359 { 00:18:43.359 "name": "pt2", 00:18:43.359 "uuid": "d06528d4-cd87-563a-963f-29d74b5dad0d", 00:18:43.359 "is_configured": true, 00:18:43.359 "data_offset": 2048, 00:18:43.359 "data_size": 63488 00:18:43.359 }, 00:18:43.359 { 00:18:43.359 "name": "pt3", 00:18:43.359 "uuid": "869a77df-1728-5f27-b3c6-577195d234bc", 00:18:43.359 "is_configured": true, 00:18:43.359 "data_offset": 2048, 00:18:43.359 "data_size": 63488 00:18:43.359 }, 00:18:43.359 { 00:18:43.359 "name": "pt4", 00:18:43.359 "uuid": "c0779430-1c97-506a-9225-87ecd14acaaf", 00:18:43.359 "is_configured": true, 00:18:43.359 "data_offset": 2048, 00:18:43.359 "data_size": 63488 00:18:43.359 } 00:18:43.359 ] 00:18:43.359 }' 00:18:43.359 10:44:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.359 10:44:09 -- common/autotest_common.sh@10 -- # set +x 00:18:44.294 10:44:10 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:44.294 10:44:10 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:44.294 [2024-07-24 10:44:10.852339] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.294 10:44:10 -- bdev/bdev_raid.sh@430 -- # '[' 3f90814b-06ee-4f66-9490-59627e553822 '!=' 3f90814b-06ee-4f66-9490-59627e553822 ']' 00:18:44.294 10:44:10 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:44.294 10:44:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:44.294 10:44:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:44.294 10:44:10 -- bdev/bdev_raid.sh@511 -- # killprocess 131410 00:18:44.294 10:44:10 -- common/autotest_common.sh@926 -- # '[' 
-z 131410 ']' 00:18:44.294 10:44:10 -- common/autotest_common.sh@930 -- # kill -0 131410 00:18:44.294 10:44:10 -- common/autotest_common.sh@931 -- # uname 00:18:44.294 10:44:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:44.294 10:44:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131410 00:18:44.294 10:44:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:44.294 10:44:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:44.294 10:44:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131410' 00:18:44.294 killing process with pid 131410 00:18:44.294 10:44:10 -- common/autotest_common.sh@945 -- # kill 131410 00:18:44.294 [2024-07-24 10:44:10.898761] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.294 [2024-07-24 10:44:10.898876] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.294 [2024-07-24 10:44:10.898962] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.294 [2024-07-24 10:44:10.898981] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:44.294 10:44:10 -- common/autotest_common.sh@950 -- # wait 131410 00:18:44.294 [2024-07-24 10:44:10.947715] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.552 10:44:11 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:44.552 00:18:44.552 real 0m11.761s 00:18:44.552 user 0m21.259s 00:18:44.552 sys 0m1.628s 00:18:44.552 10:44:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.552 10:44:11 -- common/autotest_common.sh@10 -- # set +x 00:18:44.552 ************************************ 00:18:44.552 END TEST raid_superblock_test 00:18:44.552 ************************************ 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:44.810 10:44:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:44.810 10:44:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:44.810 10:44:11 -- common/autotest_common.sh@10 -- # set +x 00:18:44.810 ************************************ 00:18:44.810 START TEST raid_state_function_test 00:18:44.810 ************************************ 00:18:44.810 10:44:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.810 10:44:11 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=131731 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:44.810 Process raid pid: 131731 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 131731' 00:18:44.810 10:44:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 131731 /var/tmp/spdk-raid.sock 00:18:44.810 10:44:11 -- common/autotest_common.sh@819 -- # '[' -z 131731 ']' 00:18:44.810 10:44:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:44.810 10:44:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:44.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:44.810 10:44:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:44.810 10:44:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:44.810 10:44:11 -- common/autotest_common.sh@10 -- # set +x 00:18:44.810 [2024-07-24 10:44:11.317104] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:18:44.810 [2024-07-24 10:44:11.317330] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.811 [2024-07-24 10:44:11.469109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.069 [2024-07-24 10:44:11.595917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.069 [2024-07-24 10:44:11.676528] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:46.003 10:44:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:46.003 10:44:12 -- common/autotest_common.sh@852 -- # return 0 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:46.003 [2024-07-24 10:44:12.585327] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:46.003 [2024-07-24 10:44:12.585443] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:46.003 [2024-07-24 10:44:12.585460] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:46.003 [2024-07-24 10:44:12.585482] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:46.003 [2024-07-24 10:44:12.585491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:46.003 [2024-07-24 10:44:12.585551] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:46.003 [2024-07-24 10:44:12.585561] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:46.003 [2024-07-24 10:44:12.585592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.003 10:44:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.262 10:44:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.262 "name": "Existed_Raid", 00:18:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.262 "strip_size_kb": 0, 00:18:46.262 "state": "configuring", 00:18:46.262 "raid_level": "raid1", 00:18:46.262 "superblock": false, 00:18:46.262 "num_base_bdevs": 4, 00:18:46.262 "num_base_bdevs_discovered": 0, 00:18:46.262 "num_base_bdevs_operational": 4, 00:18:46.262 "base_bdevs_list": [ 00:18:46.262 { 00:18:46.262 "name": 
"BaseBdev1", 00:18:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.262 "is_configured": false, 00:18:46.262 "data_offset": 0, 00:18:46.262 "data_size": 0 00:18:46.262 }, 00:18:46.262 { 00:18:46.262 "name": "BaseBdev2", 00:18:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.262 "is_configured": false, 00:18:46.262 "data_offset": 0, 00:18:46.262 "data_size": 0 00:18:46.262 }, 00:18:46.262 { 00:18:46.262 "name": "BaseBdev3", 00:18:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.262 "is_configured": false, 00:18:46.262 "data_offset": 0, 00:18:46.262 "data_size": 0 00:18:46.262 }, 00:18:46.262 { 00:18:46.262 "name": "BaseBdev4", 00:18:46.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.262 "is_configured": false, 00:18:46.262 "data_offset": 0, 00:18:46.262 "data_size": 0 00:18:46.262 } 00:18:46.262 ] 00:18:46.262 }' 00:18:46.262 10:44:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.262 10:44:12 -- common/autotest_common.sh@10 -- # set +x 00:18:46.833 10:44:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:47.400 [2024-07-24 10:44:13.785341] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.400 [2024-07-24 10:44:13.785407] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:18:47.400 10:44:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:47.400 [2024-07-24 10:44:14.065492] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.400 [2024-07-24 10:44:14.065620] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.400 [2024-07-24 10:44:14.065635] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.400 [2024-07-24 10:44:14.065668] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.400 [2024-07-24 10:44:14.065677] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.400 [2024-07-24 10:44:14.065698] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.400 [2024-07-24 10:44:14.065706] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.400 [2024-07-24 10:44:14.065735] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.400 10:44:14 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:47.658 [2024-07-24 10:44:14.340331] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.658 BaseBdev1 00:18:47.916 10:44:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:47.916 10:44:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:47.916 10:44:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:47.916 10:44:14 -- common/autotest_common.sh@889 -- # local i 00:18:47.916 10:44:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:47.916 10:44:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:47.916 10:44:14 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.916 10:44:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:48.174 [ 00:18:48.174 { 00:18:48.174 "name": "BaseBdev1", 00:18:48.174 "aliases": [ 00:18:48.174 "3af5be87-0880-40a3-8a3c-d22e0cec67fd" 00:18:48.174 ], 00:18:48.174 "product_name": "Malloc disk", 00:18:48.174 "block_size": 512, 00:18:48.174 "num_blocks": 65536, 00:18:48.174 "uuid": "3af5be87-0880-40a3-8a3c-d22e0cec67fd", 00:18:48.174 "assigned_rate_limits": { 00:18:48.174 "rw_ios_per_sec": 0, 00:18:48.174 "rw_mbytes_per_sec": 0, 00:18:48.174 "r_mbytes_per_sec": 0, 00:18:48.174 "w_mbytes_per_sec": 0 00:18:48.174 }, 00:18:48.174 "claimed": true, 00:18:48.174 "claim_type": "exclusive_write", 00:18:48.174 "zoned": false, 00:18:48.174 "supported_io_types": { 00:18:48.174 "read": true, 00:18:48.174 "write": true, 00:18:48.174 "unmap": true, 00:18:48.174 "write_zeroes": true, 00:18:48.174 "flush": true, 00:18:48.174 "reset": true, 00:18:48.174 "compare": false, 00:18:48.174 "compare_and_write": false, 00:18:48.174 "abort": true, 00:18:48.174 "nvme_admin": false, 00:18:48.174 "nvme_io": false 00:18:48.174 }, 00:18:48.174 "memory_domains": [ 00:18:48.174 { 00:18:48.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.174 "dma_device_type": 2 00:18:48.174 } 00:18:48.174 ], 00:18:48.174 "driver_specific": {} 00:18:48.174 } 00:18:48.174 ] 00:18:48.174 10:44:14 -- common/autotest_common.sh@895 -- # return 0 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.174 10:44:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.433 10:44:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.433 "name": "Existed_Raid", 00:18:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.433 "strip_size_kb": 0, 00:18:48.433 "state": "configuring", 00:18:48.433 "raid_level": "raid1", 00:18:48.433 "superblock": false, 00:18:48.433 "num_base_bdevs": 4, 00:18:48.433 "num_base_bdevs_discovered": 1, 00:18:48.433 "num_base_bdevs_operational": 4, 00:18:48.433 "base_bdevs_list": [ 00:18:48.433 { 00:18:48.433 "name": "BaseBdev1", 00:18:48.433 "uuid": "3af5be87-0880-40a3-8a3c-d22e0cec67fd", 00:18:48.433 "is_configured": true, 00:18:48.433 "data_offset": 0, 00:18:48.433 "data_size": 65536 00:18:48.433 }, 00:18:48.433 { 00:18:48.433 "name": "BaseBdev2", 00:18:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.433 "is_configured": false, 00:18:48.433 "data_offset": 0, 00:18:48.433 "data_size": 0 00:18:48.433 }, 
00:18:48.433 { 00:18:48.433 "name": "BaseBdev3", 00:18:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.433 "is_configured": false, 00:18:48.433 "data_offset": 0, 00:18:48.433 "data_size": 0 00:18:48.433 }, 00:18:48.433 { 00:18:48.433 "name": "BaseBdev4", 00:18:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.433 "is_configured": false, 00:18:48.433 "data_offset": 0, 00:18:48.433 "data_size": 0 00:18:48.433 } 00:18:48.433 ] 00:18:48.433 }' 00:18:48.433 10:44:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.433 10:44:15 -- common/autotest_common.sh@10 -- # set +x 00:18:49.004 10:44:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:49.263 [2024-07-24 10:44:15.924723] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:49.263 [2024-07-24 10:44:15.924842] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:18:49.263 10:44:15 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:49.263 10:44:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:49.521 [2024-07-24 10:44:16.160910] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.521 [2024-07-24 10:44:16.163417] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:49.521 [2024-07-24 10:44:16.163555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:49.521 [2024-07-24 10:44:16.163572] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:49.521 [2024-07-24 10:44:16.163603] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:49.521 [2024-07-24 10:44:16.163613] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:49.521 [2024-07-24 10:44:16.163633] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.521 10:44:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.779 10:44:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.779 "name": "Existed_Raid", 00:18:49.779 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:49.779 "strip_size_kb": 0, 00:18:49.779 "state": "configuring", 00:18:49.779 "raid_level": "raid1", 00:18:49.779 "superblock": false, 00:18:49.779 "num_base_bdevs": 4, 00:18:49.779 "num_base_bdevs_discovered": 1, 00:18:49.779 "num_base_bdevs_operational": 4, 00:18:49.779 "base_bdevs_list": [ 00:18:49.779 { 00:18:49.779 "name": "BaseBdev1", 00:18:49.779 "uuid": "3af5be87-0880-40a3-8a3c-d22e0cec67fd", 00:18:49.779 "is_configured": true, 00:18:49.779 "data_offset": 0, 00:18:49.779 "data_size": 65536 00:18:49.779 }, 00:18:49.779 { 00:18:49.779 "name": "BaseBdev2", 00:18:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.779 "is_configured": false, 00:18:49.779 "data_offset": 0, 00:18:49.779 "data_size": 0 00:18:49.779 }, 00:18:49.779 { 00:18:49.779 "name": "BaseBdev3", 00:18:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.779 "is_configured": false, 00:18:49.779 "data_offset": 0, 00:18:49.779 "data_size": 0 00:18:49.779 }, 00:18:49.779 { 00:18:49.779 "name": "BaseBdev4", 00:18:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.779 "is_configured": false, 00:18:49.779 "data_offset": 0, 00:18:49.779 "data_size": 0 00:18:49.779 } 00:18:49.779 ] 00:18:49.779 }' 00:18:49.779 10:44:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.779 10:44:16 -- common/autotest_common.sh@10 -- # set +x 00:18:50.711 10:44:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:50.711 [2024-07-24 10:44:17.289043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:50.711 BaseBdev2 00:18:50.711 10:44:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:50.711 10:44:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:50.711 10:44:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:50.711 10:44:17 -- common/autotest_common.sh@889 -- # local i 00:18:50.711 10:44:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:50.711 10:44:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:50.711 10:44:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:50.969 10:44:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:51.226 [ 00:18:51.226 { 00:18:51.226 "name": "BaseBdev2", 00:18:51.226 "aliases": [ 00:18:51.226 "d6247d75-90a5-46c5-9e7e-854147d20c13" 00:18:51.226 ], 00:18:51.226 "product_name": "Malloc disk", 00:18:51.226 "block_size": 512, 00:18:51.226 "num_blocks": 65536, 00:18:51.226 "uuid": "d6247d75-90a5-46c5-9e7e-854147d20c13", 00:18:51.226 "assigned_rate_limits": { 00:18:51.226 "rw_ios_per_sec": 0, 00:18:51.226 "rw_mbytes_per_sec": 0, 00:18:51.226 "r_mbytes_per_sec": 0, 00:18:51.226 "w_mbytes_per_sec": 0 00:18:51.227 }, 00:18:51.227 "claimed": true, 00:18:51.227 "claim_type": "exclusive_write", 00:18:51.227 "zoned": false, 00:18:51.227 "supported_io_types": { 00:18:51.227 "read": true, 00:18:51.227 "write": true, 00:18:51.227 "unmap": true, 00:18:51.227 "write_zeroes": true, 00:18:51.227 "flush": true, 00:18:51.227 "reset": true, 00:18:51.227 "compare": false, 00:18:51.227 "compare_and_write": false, 00:18:51.227 "abort": true, 00:18:51.227 "nvme_admin": false, 00:18:51.227 "nvme_io": false 00:18:51.227 }, 00:18:51.227 "memory_domains": [ 00:18:51.227 { 
00:18:51.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.227 "dma_device_type": 2 00:18:51.227 } 00:18:51.227 ], 00:18:51.227 "driver_specific": {} 00:18:51.227 } 00:18:51.227 ] 00:18:51.227 10:44:17 -- common/autotest_common.sh@895 -- # return 0 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.227 10:44:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.484 10:44:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.484 "name": "Existed_Raid", 00:18:51.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.484 "strip_size_kb": 0, 00:18:51.484 "state": "configuring", 00:18:51.484 "raid_level": "raid1", 00:18:51.484 "superblock": false, 00:18:51.484 "num_base_bdevs": 4, 00:18:51.484 "num_base_bdevs_discovered": 2, 00:18:51.484 "num_base_bdevs_operational": 4, 00:18:51.484 "base_bdevs_list": [ 00:18:51.484 { 00:18:51.484 "name": "BaseBdev1", 00:18:51.484 "uuid": "3af5be87-0880-40a3-8a3c-d22e0cec67fd", 00:18:51.484 "is_configured": true, 00:18:51.484 "data_offset": 0, 00:18:51.484 "data_size": 65536 00:18:51.484 }, 00:18:51.484 { 00:18:51.484 "name": "BaseBdev2", 00:18:51.484 "uuid": "d6247d75-90a5-46c5-9e7e-854147d20c13", 00:18:51.484 "is_configured": true, 00:18:51.484 "data_offset": 0, 00:18:51.484 "data_size": 65536 00:18:51.484 }, 00:18:51.484 { 00:18:51.484 "name": "BaseBdev3", 00:18:51.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.484 "is_configured": false, 00:18:51.484 "data_offset": 0, 00:18:51.484 "data_size": 0 00:18:51.484 }, 00:18:51.484 { 00:18:51.484 "name": "BaseBdev4", 00:18:51.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.484 "is_configured": false, 00:18:51.484 "data_offset": 0, 00:18:51.484 "data_size": 0 00:18:51.484 } 00:18:51.484 ] 00:18:51.484 }' 00:18:51.484 10:44:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.484 10:44:18 -- common/autotest_common.sh@10 -- # set +x 00:18:52.417 10:44:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:52.417 [2024-07-24 10:44:19.024422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.417 BaseBdev3 00:18:52.417 10:44:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:52.417 10:44:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:52.417 10:44:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:52.417 10:44:19 -- 
common/autotest_common.sh@889 -- # local i 00:18:52.417 10:44:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:52.417 10:44:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:52.417 10:44:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.674 10:44:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:52.932 [ 00:18:52.932 { 00:18:52.932 "name": "BaseBdev3", 00:18:52.932 "aliases": [ 00:18:52.932 "bd487c6d-7d23-4af7-89a8-c56e295af08f" 00:18:52.932 ], 00:18:52.932 "product_name": "Malloc disk", 00:18:52.932 "block_size": 512, 00:18:52.932 "num_blocks": 65536, 00:18:52.932 "uuid": "bd487c6d-7d23-4af7-89a8-c56e295af08f", 00:18:52.932 "assigned_rate_limits": { 00:18:52.932 "rw_ios_per_sec": 0, 00:18:52.932 "rw_mbytes_per_sec": 0, 00:18:52.932 "r_mbytes_per_sec": 0, 00:18:52.932 "w_mbytes_per_sec": 0 00:18:52.932 }, 00:18:52.932 "claimed": true, 00:18:52.932 "claim_type": "exclusive_write", 00:18:52.932 "zoned": false, 00:18:52.932 "supported_io_types": { 00:18:52.932 "read": true, 00:18:52.932 "write": true, 00:18:52.932 "unmap": true, 00:18:52.932 "write_zeroes": true, 00:18:52.932 "flush": true, 00:18:52.932 "reset": true, 00:18:52.932 "compare": false, 00:18:52.932 "compare_and_write": false, 00:18:52.932 "abort": true, 00:18:52.932 "nvme_admin": false, 00:18:52.932 "nvme_io": false 00:18:52.932 }, 00:18:52.932 "memory_domains": [ 00:18:52.932 { 00:18:52.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.932 "dma_device_type": 2 00:18:52.932 } 00:18:52.932 ], 00:18:52.933 "driver_specific": {} 00:18:52.933 } 00:18:52.933 ] 00:18:52.933 10:44:19 -- common/autotest_common.sh@895 -- # return 0 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.933 10:44:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.190 10:44:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.190 "name": "Existed_Raid", 00:18:53.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.190 "strip_size_kb": 0, 00:18:53.190 "state": "configuring", 00:18:53.190 "raid_level": "raid1", 00:18:53.190 "superblock": false, 00:18:53.190 "num_base_bdevs": 4, 00:18:53.190 "num_base_bdevs_discovered": 3, 00:18:53.190 "num_base_bdevs_operational": 4, 00:18:53.190 "base_bdevs_list": [ 00:18:53.190 { 00:18:53.190 "name": "BaseBdev1", 
00:18:53.190 "uuid": "3af5be87-0880-40a3-8a3c-d22e0cec67fd", 00:18:53.190 "is_configured": true, 00:18:53.190 "data_offset": 0, 00:18:53.190 "data_size": 65536 00:18:53.190 }, 00:18:53.190 { 00:18:53.190 "name": "BaseBdev2", 00:18:53.190 "uuid": "d6247d75-90a5-46c5-9e7e-854147d20c13", 00:18:53.190 "is_configured": true, 00:18:53.190 "data_offset": 0, 00:18:53.190 "data_size": 65536 00:18:53.190 }, 00:18:53.190 { 00:18:53.190 "name": "BaseBdev3", 00:18:53.190 "uuid": "bd487c6d-7d23-4af7-89a8-c56e295af08f", 00:18:53.190 "is_configured": true, 00:18:53.190 "data_offset": 0, 00:18:53.190 "data_size": 65536 00:18:53.190 }, 00:18:53.190 { 00:18:53.190 "name": "BaseBdev4", 00:18:53.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.191 "is_configured": false, 00:18:53.191 "data_offset": 0, 00:18:53.191 "data_size": 0 00:18:53.191 } 00:18:53.191 ] 00:18:53.191 }' 00:18:53.191 10:44:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.191 10:44:19 -- common/autotest_common.sh@10 -- # set +x 00:18:54.123 10:44:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:54.123 [2024-07-24 10:44:20.700997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:54.123 [2024-07-24 10:44:20.701079] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:18:54.123 [2024-07-24 10:44:20.701092] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:54.123 [2024-07-24 10:44:20.701253] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:18:54.123 [2024-07-24 10:44:20.701727] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:18:54.123 [2024-07-24 10:44:20.701743] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:18:54.123 [2024-07-24 10:44:20.702008] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:54.123 BaseBdev4 00:18:54.123 10:44:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:54.123 10:44:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:54.123 10:44:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:54.123 10:44:20 -- common/autotest_common.sh@889 -- # local i 00:18:54.123 10:44:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:54.123 10:44:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:54.123 10:44:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:54.382 10:44:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:54.640 [ 00:18:54.640 { 00:18:54.640 "name": "BaseBdev4", 00:18:54.640 "aliases": [ 00:18:54.640 "ea8c5ad9-52ae-4df2-9080-3daf12584d97" 00:18:54.640 ], 00:18:54.640 "product_name": "Malloc disk", 00:18:54.640 "block_size": 512, 00:18:54.640 "num_blocks": 65536, 00:18:54.640 "uuid": "ea8c5ad9-52ae-4df2-9080-3daf12584d97", 00:18:54.640 "assigned_rate_limits": { 00:18:54.640 "rw_ios_per_sec": 0, 00:18:54.640 "rw_mbytes_per_sec": 0, 00:18:54.640 "r_mbytes_per_sec": 0, 00:18:54.640 "w_mbytes_per_sec": 0 00:18:54.640 }, 00:18:54.640 "claimed": true, 00:18:54.640 "claim_type": "exclusive_write", 00:18:54.640 "zoned": false, 00:18:54.640 "supported_io_types": { 
00:18:54.640 "read": true, 00:18:54.640 "write": true, 00:18:54.640 "unmap": true, 00:18:54.640 "write_zeroes": true, 00:18:54.640 "flush": true, 00:18:54.640 "reset": true, 00:18:54.640 "compare": false, 00:18:54.640 "compare_and_write": false, 00:18:54.640 "abort": true, 00:18:54.640 "nvme_admin": false, 00:18:54.640 "nvme_io": false 00:18:54.640 }, 00:18:54.640 "memory_domains": [ 00:18:54.640 { 00:18:54.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.640 "dma_device_type": 2 00:18:54.640 } 00:18:54.640 ], 00:18:54.640 "driver_specific": {} 00:18:54.640 } 00:18:54.640 ] 00:18:54.640 10:44:21 -- common/autotest_common.sh@895 -- # return 0 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.640 10:44:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.898 10:44:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.898 "name": "Existed_Raid", 00:18:54.898 "uuid": "069b5bbf-b4fd-4835-bdd6-38ed07c09dd9", 00:18:54.898 "strip_size_kb": 0, 00:18:54.898 "state": "online", 00:18:54.898 "raid_level": "raid1", 00:18:54.898 "superblock": false, 00:18:54.898 "num_base_bdevs": 4, 00:18:54.898 "num_base_bdevs_discovered": 4, 00:18:54.898 "num_base_bdevs_operational": 4, 00:18:54.898 "base_bdevs_list": [ 00:18:54.898 { 00:18:54.898 "name": "BaseBdev1", 00:18:54.898 "uuid": "3af5be87-0880-40a3-8a3c-d22e0cec67fd", 00:18:54.898 "is_configured": true, 00:18:54.898 "data_offset": 0, 00:18:54.898 "data_size": 65536 00:18:54.898 }, 00:18:54.898 { 00:18:54.898 "name": "BaseBdev2", 00:18:54.898 "uuid": "d6247d75-90a5-46c5-9e7e-854147d20c13", 00:18:54.898 "is_configured": true, 00:18:54.898 "data_offset": 0, 00:18:54.898 "data_size": 65536 00:18:54.898 }, 00:18:54.898 { 00:18:54.898 "name": "BaseBdev3", 00:18:54.898 "uuid": "bd487c6d-7d23-4af7-89a8-c56e295af08f", 00:18:54.898 "is_configured": true, 00:18:54.898 "data_offset": 0, 00:18:54.898 "data_size": 65536 00:18:54.898 }, 00:18:54.898 { 00:18:54.898 "name": "BaseBdev4", 00:18:54.898 "uuid": "ea8c5ad9-52ae-4df2-9080-3daf12584d97", 00:18:54.898 "is_configured": true, 00:18:54.898 "data_offset": 0, 00:18:54.898 "data_size": 65536 00:18:54.898 } 00:18:54.898 ] 00:18:54.898 }' 00:18:54.898 10:44:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.898 10:44:21 -- common/autotest_common.sh@10 -- # set +x 00:18:55.464 10:44:22 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:55.722 [2024-07-24 10:44:22.221661] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.722 10:44:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.979 10:44:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.979 "name": "Existed_Raid", 00:18:55.979 "uuid": "069b5bbf-b4fd-4835-bdd6-38ed07c09dd9", 00:18:55.979 "strip_size_kb": 0, 00:18:55.979 "state": "online", 00:18:55.979 "raid_level": "raid1", 00:18:55.979 "superblock": false, 00:18:55.979 "num_base_bdevs": 4, 00:18:55.979 "num_base_bdevs_discovered": 3, 00:18:55.979 "num_base_bdevs_operational": 3, 00:18:55.979 "base_bdevs_list": [ 00:18:55.979 { 00:18:55.979 "name": null, 00:18:55.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.979 "is_configured": false, 00:18:55.979 "data_offset": 0, 00:18:55.979 "data_size": 65536 00:18:55.979 }, 00:18:55.980 { 00:18:55.980 "name": "BaseBdev2", 00:18:55.980 "uuid": "d6247d75-90a5-46c5-9e7e-854147d20c13", 00:18:55.980 "is_configured": true, 00:18:55.980 "data_offset": 0, 00:18:55.980 "data_size": 65536 00:18:55.980 }, 00:18:55.980 { 00:18:55.980 "name": "BaseBdev3", 00:18:55.980 "uuid": "bd487c6d-7d23-4af7-89a8-c56e295af08f", 00:18:55.980 "is_configured": true, 00:18:55.980 "data_offset": 0, 00:18:55.980 "data_size": 65536 00:18:55.980 }, 00:18:55.980 { 00:18:55.980 "name": "BaseBdev4", 00:18:55.980 "uuid": "ea8c5ad9-52ae-4df2-9080-3daf12584d97", 00:18:55.980 "is_configured": true, 00:18:55.980 "data_offset": 0, 00:18:55.980 "data_size": 65536 00:18:55.980 } 00:18:55.980 ] 00:18:55.980 }' 00:18:55.980 10:44:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.980 10:44:22 -- common/autotest_common.sh@10 -- # set +x 00:18:56.545 10:44:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:56.545 10:44:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:56.545 10:44:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.545 10:44:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:56.803 10:44:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:56.803 10:44:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.803 10:44:23 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:57.060 [2024-07-24 10:44:23.581106] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:57.060 10:44:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:57.060 10:44:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.060 10:44:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.060 10:44:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:57.316 10:44:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:57.316 10:44:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.316 10:44:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:57.574 [2024-07-24 10:44:24.103929] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:57.574 10:44:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:57.574 10:44:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.574 10:44:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.574 10:44:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:57.831 10:44:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:57.831 10:44:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.831 10:44:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:58.089 [2024-07-24 10:44:24.585169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:58.089 [2024-07-24 10:44:24.585218] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.089 [2024-07-24 10:44:24.585321] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.089 [2024-07-24 10:44:24.598784] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.089 [2024-07-24 10:44:24.598830] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:18:58.089 10:44:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:58.089 10:44:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:58.089 10:44:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.089 10:44:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.347 10:44:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:58.347 10:44:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:58.347 10:44:24 -- bdev/bdev_raid.sh@287 -- # killprocess 131731 00:18:58.347 10:44:24 -- common/autotest_common.sh@926 -- # '[' -z 131731 ']' 00:18:58.347 10:44:24 -- common/autotest_common.sh@930 -- # kill -0 131731 00:18:58.347 10:44:24 -- common/autotest_common.sh@931 -- # uname 00:18:58.347 10:44:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:58.347 10:44:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131731 00:18:58.347 10:44:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:58.347 10:44:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:58.347 10:44:24 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 131731' 00:18:58.347 killing process with pid 131731 00:18:58.347 10:44:24 -- common/autotest_common.sh@945 -- # kill 131731 00:18:58.347 [2024-07-24 10:44:24.904194] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:58.347 10:44:24 -- common/autotest_common.sh@950 -- # wait 131731 00:18:58.347 [2024-07-24 10:44:24.904303] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:58.653 00:18:58.653 real 0m13.888s 00:18:58.653 user 0m25.620s 00:18:58.653 sys 0m1.836s 00:18:58.653 10:44:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.653 10:44:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.653 ************************************ 00:18:58.653 END TEST raid_state_function_test 00:18:58.653 ************************************ 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:58.653 10:44:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:58.653 10:44:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:58.653 10:44:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.653 ************************************ 00:18:58.653 START TEST raid_state_function_test_sb 00:18:58.653 ************************************ 00:18:58.653 10:44:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=132169 00:18:58.653 Process raid pid: 132169 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132169' 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132169 /var/tmp/spdk-raid.sock 00:18:58.653 10:44:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:58.653 10:44:25 -- common/autotest_common.sh@819 -- # '[' -z 132169 ']' 00:18:58.653 10:44:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:58.653 10:44:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:58.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:58.653 10:44:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:58.653 10:44:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:58.653 10:44:25 -- common/autotest_common.sh@10 -- # set +x 00:18:58.653 [2024-07-24 10:44:25.256379] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:58.653 [2024-07-24 10:44:25.256604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.919 [2024-07-24 10:44:25.405576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.919 [2024-07-24 10:44:25.506092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.919 [2024-07-24 10:44:25.562538] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:59.851 10:44:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:59.851 10:44:26 -- common/autotest_common.sh@852 -- # return 0 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:59.851 [2024-07-24 10:44:26.460624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.851 [2024-07-24 10:44:26.460950] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.851 [2024-07-24 10:44:26.461080] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.851 [2024-07-24 10:44:26.461224] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:59.851 [2024-07-24 10:44:26.461333] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.851 [2024-07-24 10:44:26.461508] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.851 [2024-07-24 10:44:26.461621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:59.851 [2024-07-24 10:44:26.461692] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:59.851 10:44:26 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.851 10:44:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.108 10:44:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.108 "name": "Existed_Raid", 00:19:00.108 "uuid": "beceb576-4720-4fad-a156-84abb23f305b", 00:19:00.108 "strip_size_kb": 0, 00:19:00.108 "state": "configuring", 00:19:00.108 "raid_level": "raid1", 00:19:00.108 "superblock": true, 00:19:00.108 "num_base_bdevs": 4, 00:19:00.108 "num_base_bdevs_discovered": 0, 00:19:00.108 "num_base_bdevs_operational": 4, 00:19:00.108 "base_bdevs_list": [ 00:19:00.108 { 00:19:00.108 "name": "BaseBdev1", 00:19:00.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.108 "is_configured": false, 00:19:00.108 "data_offset": 0, 00:19:00.108 "data_size": 0 00:19:00.108 }, 00:19:00.108 { 00:19:00.108 "name": "BaseBdev2", 00:19:00.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.108 "is_configured": false, 00:19:00.108 "data_offset": 0, 00:19:00.108 "data_size": 0 00:19:00.108 }, 00:19:00.108 { 00:19:00.108 "name": "BaseBdev3", 00:19:00.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.108 "is_configured": false, 00:19:00.108 "data_offset": 0, 00:19:00.108 "data_size": 0 00:19:00.108 }, 00:19:00.108 { 00:19:00.108 "name": "BaseBdev4", 00:19:00.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.108 "is_configured": false, 00:19:00.108 "data_offset": 0, 00:19:00.108 "data_size": 0 00:19:00.108 } 00:19:00.108 ] 00:19:00.108 }' 00:19:00.108 10:44:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.108 10:44:26 -- common/autotest_common.sh@10 -- # set +x 00:19:01.042 10:44:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.042 [2024-07-24 10:44:27.648752] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.042 [2024-07-24 10:44:27.648812] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:19:01.042 10:44:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:01.301 [2024-07-24 10:44:27.928884] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.301 [2024-07-24 10:44:27.928972] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.301 [2024-07-24 10:44:27.928987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.301 [2024-07-24 10:44:27.929016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.301 [2024-07-24 10:44:27.929025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.301 [2024-07-24 10:44:27.929044] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.301 [2024-07-24 10:44:27.929051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:01.301 [2024-07-24 10:44:27.929077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:01.301 10:44:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:01.561 [2024-07-24 10:44:28.216402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.561 BaseBdev1 00:19:01.561 10:44:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:01.561 10:44:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:01.561 10:44:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:01.561 10:44:28 -- common/autotest_common.sh@889 -- # local i 00:19:01.561 10:44:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:01.561 10:44:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:01.561 10:44:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:02.129 10:44:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:02.129 [ 00:19:02.129 { 00:19:02.129 "name": "BaseBdev1", 00:19:02.129 "aliases": [ 00:19:02.129 "18052e1e-fc47-4702-afc2-aba9892d1cf8" 00:19:02.129 ], 00:19:02.129 "product_name": "Malloc disk", 00:19:02.129 "block_size": 512, 00:19:02.129 "num_blocks": 65536, 00:19:02.129 "uuid": "18052e1e-fc47-4702-afc2-aba9892d1cf8", 00:19:02.129 "assigned_rate_limits": { 00:19:02.129 "rw_ios_per_sec": 0, 00:19:02.129 "rw_mbytes_per_sec": 0, 00:19:02.129 "r_mbytes_per_sec": 0, 00:19:02.129 "w_mbytes_per_sec": 0 00:19:02.129 }, 00:19:02.129 "claimed": true, 00:19:02.129 "claim_type": "exclusive_write", 00:19:02.129 "zoned": false, 00:19:02.129 "supported_io_types": { 00:19:02.129 "read": true, 00:19:02.129 "write": true, 00:19:02.129 "unmap": true, 00:19:02.129 "write_zeroes": true, 00:19:02.129 "flush": true, 00:19:02.129 "reset": true, 00:19:02.129 "compare": false, 00:19:02.129 "compare_and_write": false, 00:19:02.129 "abort": true, 00:19:02.129 "nvme_admin": false, 00:19:02.129 "nvme_io": false 00:19:02.129 }, 00:19:02.129 "memory_domains": [ 00:19:02.129 { 00:19:02.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.129 "dma_device_type": 2 00:19:02.129 } 00:19:02.129 ], 00:19:02.129 "driver_specific": {} 00:19:02.129 } 00:19:02.129 ] 00:19:02.389 10:44:28 -- common/autotest_common.sh@895 -- # return 0 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.389 10:44:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.647 10:44:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:02.647 "name": "Existed_Raid", 00:19:02.647 "uuid": "b8631cf2-2b92-4429-9059-e3c6c1dfea2f", 00:19:02.647 "strip_size_kb": 0, 00:19:02.647 "state": "configuring", 00:19:02.647 "raid_level": "raid1", 00:19:02.647 "superblock": true, 00:19:02.647 "num_base_bdevs": 4, 00:19:02.647 "num_base_bdevs_discovered": 1, 00:19:02.647 "num_base_bdevs_operational": 4, 00:19:02.647 "base_bdevs_list": [ 00:19:02.647 { 00:19:02.647 "name": "BaseBdev1", 00:19:02.647 "uuid": "18052e1e-fc47-4702-afc2-aba9892d1cf8", 00:19:02.647 "is_configured": true, 00:19:02.647 "data_offset": 2048, 00:19:02.647 "data_size": 63488 00:19:02.647 }, 00:19:02.647 { 00:19:02.647 "name": "BaseBdev2", 00:19:02.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.647 "is_configured": false, 00:19:02.647 "data_offset": 0, 00:19:02.647 "data_size": 0 00:19:02.647 }, 00:19:02.647 { 00:19:02.647 "name": "BaseBdev3", 00:19:02.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.647 "is_configured": false, 00:19:02.647 "data_offset": 0, 00:19:02.647 "data_size": 0 00:19:02.647 }, 00:19:02.647 { 00:19:02.647 "name": "BaseBdev4", 00:19:02.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.647 "is_configured": false, 00:19:02.647 "data_offset": 0, 00:19:02.647 "data_size": 0 00:19:02.647 } 00:19:02.647 ] 00:19:02.647 }' 00:19:02.647 10:44:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:02.647 10:44:29 -- common/autotest_common.sh@10 -- # set +x 00:19:03.214 10:44:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:03.473 [2024-07-24 10:44:30.052866] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.473 [2024-07-24 10:44:30.053000] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:19:03.473 10:44:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:03.473 10:44:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:03.745 10:44:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:04.036 BaseBdev1 00:19:04.036 10:44:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:04.036 10:44:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:04.036 10:44:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:04.036 10:44:30 -- common/autotest_common.sh@889 -- # local i 00:19:04.036 10:44:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:04.036 10:44:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:04.036 10:44:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.294 10:44:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:04.552 [ 00:19:04.552 { 00:19:04.552 "name": "BaseBdev1", 00:19:04.552 "aliases": [ 00:19:04.552 "a1ebcd52-4009-4f68-92ba-433a82a71b13" 00:19:04.552 ], 00:19:04.552 
"product_name": "Malloc disk", 00:19:04.552 "block_size": 512, 00:19:04.552 "num_blocks": 65536, 00:19:04.552 "uuid": "a1ebcd52-4009-4f68-92ba-433a82a71b13", 00:19:04.552 "assigned_rate_limits": { 00:19:04.552 "rw_ios_per_sec": 0, 00:19:04.552 "rw_mbytes_per_sec": 0, 00:19:04.552 "r_mbytes_per_sec": 0, 00:19:04.552 "w_mbytes_per_sec": 0 00:19:04.552 }, 00:19:04.552 "claimed": false, 00:19:04.552 "zoned": false, 00:19:04.552 "supported_io_types": { 00:19:04.552 "read": true, 00:19:04.552 "write": true, 00:19:04.552 "unmap": true, 00:19:04.552 "write_zeroes": true, 00:19:04.552 "flush": true, 00:19:04.552 "reset": true, 00:19:04.552 "compare": false, 00:19:04.552 "compare_and_write": false, 00:19:04.552 "abort": true, 00:19:04.552 "nvme_admin": false, 00:19:04.552 "nvme_io": false 00:19:04.552 }, 00:19:04.552 "memory_domains": [ 00:19:04.552 { 00:19:04.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.552 "dma_device_type": 2 00:19:04.552 } 00:19:04.552 ], 00:19:04.552 "driver_specific": {} 00:19:04.552 } 00:19:04.552 ] 00:19:04.552 10:44:31 -- common/autotest_common.sh@895 -- # return 0 00:19:04.553 10:44:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:04.810 [2024-07-24 10:44:31.454480] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:04.810 [2024-07-24 10:44:31.456893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.810 [2024-07-24 10:44:31.457009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.810 [2024-07-24 10:44:31.457027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:04.810 [2024-07-24 10:44:31.457061] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:04.810 [2024-07-24 10:44:31.457074] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:04.810 [2024-07-24 10:44:31.457098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:04.811 10:44:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.069 10:44:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.069 "name": "Existed_Raid", 00:19:05.069 "uuid": 
"b3f47de9-500c-49fa-a43f-c72fbddd04ff", 00:19:05.069 "strip_size_kb": 0, 00:19:05.069 "state": "configuring", 00:19:05.069 "raid_level": "raid1", 00:19:05.069 "superblock": true, 00:19:05.069 "num_base_bdevs": 4, 00:19:05.069 "num_base_bdevs_discovered": 1, 00:19:05.069 "num_base_bdevs_operational": 4, 00:19:05.069 "base_bdevs_list": [ 00:19:05.069 { 00:19:05.069 "name": "BaseBdev1", 00:19:05.069 "uuid": "a1ebcd52-4009-4f68-92ba-433a82a71b13", 00:19:05.069 "is_configured": true, 00:19:05.069 "data_offset": 2048, 00:19:05.069 "data_size": 63488 00:19:05.069 }, 00:19:05.069 { 00:19:05.069 "name": "BaseBdev2", 00:19:05.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.069 "is_configured": false, 00:19:05.069 "data_offset": 0, 00:19:05.069 "data_size": 0 00:19:05.069 }, 00:19:05.070 { 00:19:05.070 "name": "BaseBdev3", 00:19:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.070 "is_configured": false, 00:19:05.070 "data_offset": 0, 00:19:05.070 "data_size": 0 00:19:05.070 }, 00:19:05.070 { 00:19:05.070 "name": "BaseBdev4", 00:19:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.070 "is_configured": false, 00:19:05.070 "data_offset": 0, 00:19:05.070 "data_size": 0 00:19:05.070 } 00:19:05.070 ] 00:19:05.070 }' 00:19:05.070 10:44:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.070 10:44:31 -- common/autotest_common.sh@10 -- # set +x 00:19:06.004 10:44:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:06.004 [2024-07-24 10:44:32.671592] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:06.004 BaseBdev2 00:19:06.004 10:44:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:06.004 10:44:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:06.004 10:44:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:06.004 10:44:32 -- common/autotest_common.sh@889 -- # local i 00:19:06.004 10:44:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:06.005 10:44:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:06.005 10:44:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:06.572 10:44:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:06.572 [ 00:19:06.572 { 00:19:06.572 "name": "BaseBdev2", 00:19:06.572 "aliases": [ 00:19:06.572 "512cc976-a35f-4093-8eca-e0306d8de4f1" 00:19:06.572 ], 00:19:06.572 "product_name": "Malloc disk", 00:19:06.572 "block_size": 512, 00:19:06.572 "num_blocks": 65536, 00:19:06.572 "uuid": "512cc976-a35f-4093-8eca-e0306d8de4f1", 00:19:06.572 "assigned_rate_limits": { 00:19:06.572 "rw_ios_per_sec": 0, 00:19:06.572 "rw_mbytes_per_sec": 0, 00:19:06.572 "r_mbytes_per_sec": 0, 00:19:06.572 "w_mbytes_per_sec": 0 00:19:06.572 }, 00:19:06.572 "claimed": true, 00:19:06.572 "claim_type": "exclusive_write", 00:19:06.572 "zoned": false, 00:19:06.572 "supported_io_types": { 00:19:06.572 "read": true, 00:19:06.572 "write": true, 00:19:06.572 "unmap": true, 00:19:06.572 "write_zeroes": true, 00:19:06.572 "flush": true, 00:19:06.572 "reset": true, 00:19:06.572 "compare": false, 00:19:06.572 "compare_and_write": false, 00:19:06.572 "abort": true, 00:19:06.572 "nvme_admin": false, 00:19:06.572 "nvme_io": false 00:19:06.572 }, 00:19:06.572 "memory_domains": [ 00:19:06.572 { 
00:19:06.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.572 "dma_device_type": 2 00:19:06.572 } 00:19:06.572 ], 00:19:06.572 "driver_specific": {} 00:19:06.572 } 00:19:06.572 ] 00:19:06.572 10:44:33 -- common/autotest_common.sh@895 -- # return 0 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.572 10:44:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.831 10:44:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:06.831 "name": "Existed_Raid", 00:19:06.831 "uuid": "b3f47de9-500c-49fa-a43f-c72fbddd04ff", 00:19:06.831 "strip_size_kb": 0, 00:19:06.831 "state": "configuring", 00:19:06.831 "raid_level": "raid1", 00:19:06.831 "superblock": true, 00:19:06.831 "num_base_bdevs": 4, 00:19:06.831 "num_base_bdevs_discovered": 2, 00:19:06.831 "num_base_bdevs_operational": 4, 00:19:06.831 "base_bdevs_list": [ 00:19:06.831 { 00:19:06.831 "name": "BaseBdev1", 00:19:06.831 "uuid": "a1ebcd52-4009-4f68-92ba-433a82a71b13", 00:19:06.831 "is_configured": true, 00:19:06.831 "data_offset": 2048, 00:19:06.831 "data_size": 63488 00:19:06.831 }, 00:19:06.831 { 00:19:06.831 "name": "BaseBdev2", 00:19:06.831 "uuid": "512cc976-a35f-4093-8eca-e0306d8de4f1", 00:19:06.831 "is_configured": true, 00:19:06.831 "data_offset": 2048, 00:19:06.831 "data_size": 63488 00:19:06.831 }, 00:19:06.831 { 00:19:06.831 "name": "BaseBdev3", 00:19:06.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.831 "is_configured": false, 00:19:06.831 "data_offset": 0, 00:19:06.831 "data_size": 0 00:19:06.831 }, 00:19:06.831 { 00:19:06.831 "name": "BaseBdev4", 00:19:06.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.831 "is_configured": false, 00:19:06.831 "data_offset": 0, 00:19:06.831 "data_size": 0 00:19:06.831 } 00:19:06.831 ] 00:19:06.831 }' 00:19:06.831 10:44:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:06.831 10:44:33 -- common/autotest_common.sh@10 -- # set +x 00:19:07.765 10:44:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:07.765 [2024-07-24 10:44:34.321040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:07.765 BaseBdev3 00:19:07.765 10:44:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:07.765 10:44:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:07.765 10:44:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:07.765 10:44:34 -- 
common/autotest_common.sh@889 -- # local i 00:19:07.765 10:44:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:07.765 10:44:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:07.765 10:44:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.023 10:44:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:08.282 [ 00:19:08.282 { 00:19:08.282 "name": "BaseBdev3", 00:19:08.282 "aliases": [ 00:19:08.282 "85b7a114-aeb0-43ed-a653-7e60fd45f763" 00:19:08.282 ], 00:19:08.282 "product_name": "Malloc disk", 00:19:08.282 "block_size": 512, 00:19:08.282 "num_blocks": 65536, 00:19:08.282 "uuid": "85b7a114-aeb0-43ed-a653-7e60fd45f763", 00:19:08.282 "assigned_rate_limits": { 00:19:08.282 "rw_ios_per_sec": 0, 00:19:08.282 "rw_mbytes_per_sec": 0, 00:19:08.282 "r_mbytes_per_sec": 0, 00:19:08.282 "w_mbytes_per_sec": 0 00:19:08.282 }, 00:19:08.282 "claimed": true, 00:19:08.282 "claim_type": "exclusive_write", 00:19:08.282 "zoned": false, 00:19:08.282 "supported_io_types": { 00:19:08.282 "read": true, 00:19:08.282 "write": true, 00:19:08.282 "unmap": true, 00:19:08.282 "write_zeroes": true, 00:19:08.282 "flush": true, 00:19:08.282 "reset": true, 00:19:08.282 "compare": false, 00:19:08.282 "compare_and_write": false, 00:19:08.282 "abort": true, 00:19:08.282 "nvme_admin": false, 00:19:08.282 "nvme_io": false 00:19:08.282 }, 00:19:08.282 "memory_domains": [ 00:19:08.282 { 00:19:08.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.282 "dma_device_type": 2 00:19:08.282 } 00:19:08.282 ], 00:19:08.282 "driver_specific": {} 00:19:08.282 } 00:19:08.282 ] 00:19:08.282 10:44:34 -- common/autotest_common.sh@895 -- # return 0 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.282 10:44:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.541 10:44:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.541 "name": "Existed_Raid", 00:19:08.541 "uuid": "b3f47de9-500c-49fa-a43f-c72fbddd04ff", 00:19:08.541 "strip_size_kb": 0, 00:19:08.541 "state": "configuring", 00:19:08.541 "raid_level": "raid1", 00:19:08.541 "superblock": true, 00:19:08.541 "num_base_bdevs": 4, 00:19:08.541 "num_base_bdevs_discovered": 3, 00:19:08.541 "num_base_bdevs_operational": 4, 00:19:08.541 "base_bdevs_list": [ 00:19:08.541 { 00:19:08.541 "name": "BaseBdev1", 00:19:08.541 
"uuid": "a1ebcd52-4009-4f68-92ba-433a82a71b13", 00:19:08.541 "is_configured": true, 00:19:08.541 "data_offset": 2048, 00:19:08.541 "data_size": 63488 00:19:08.541 }, 00:19:08.541 { 00:19:08.541 "name": "BaseBdev2", 00:19:08.541 "uuid": "512cc976-a35f-4093-8eca-e0306d8de4f1", 00:19:08.541 "is_configured": true, 00:19:08.541 "data_offset": 2048, 00:19:08.541 "data_size": 63488 00:19:08.541 }, 00:19:08.541 { 00:19:08.541 "name": "BaseBdev3", 00:19:08.541 "uuid": "85b7a114-aeb0-43ed-a653-7e60fd45f763", 00:19:08.541 "is_configured": true, 00:19:08.541 "data_offset": 2048, 00:19:08.541 "data_size": 63488 00:19:08.541 }, 00:19:08.541 { 00:19:08.541 "name": "BaseBdev4", 00:19:08.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.541 "is_configured": false, 00:19:08.541 "data_offset": 0, 00:19:08.541 "data_size": 0 00:19:08.541 } 00:19:08.541 ] 00:19:08.541 }' 00:19:08.541 10:44:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.541 10:44:35 -- common/autotest_common.sh@10 -- # set +x 00:19:09.477 10:44:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:09.477 [2024-07-24 10:44:36.122921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:09.477 [2024-07-24 10:44:36.123229] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:19:09.477 [2024-07-24 10:44:36.123246] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:09.477 [2024-07-24 10:44:36.123387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:19:09.477 [2024-07-24 10:44:36.123859] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:19:09.477 [2024-07-24 10:44:36.123885] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:19:09.477 [2024-07-24 10:44:36.124044] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.477 BaseBdev4 00:19:09.477 10:44:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:09.477 10:44:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:09.477 10:44:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:09.477 10:44:36 -- common/autotest_common.sh@889 -- # local i 00:19:09.477 10:44:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:09.477 10:44:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:09.477 10:44:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:10.044 10:44:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:10.044 [ 00:19:10.044 { 00:19:10.044 "name": "BaseBdev4", 00:19:10.044 "aliases": [ 00:19:10.044 "da38fcdc-3701-444a-8482-a669823c6a16" 00:19:10.044 ], 00:19:10.044 "product_name": "Malloc disk", 00:19:10.044 "block_size": 512, 00:19:10.044 "num_blocks": 65536, 00:19:10.044 "uuid": "da38fcdc-3701-444a-8482-a669823c6a16", 00:19:10.044 "assigned_rate_limits": { 00:19:10.044 "rw_ios_per_sec": 0, 00:19:10.044 "rw_mbytes_per_sec": 0, 00:19:10.044 "r_mbytes_per_sec": 0, 00:19:10.044 "w_mbytes_per_sec": 0 00:19:10.044 }, 00:19:10.044 "claimed": true, 00:19:10.044 "claim_type": "exclusive_write", 00:19:10.044 "zoned": false, 00:19:10.044 "supported_io_types": { 00:19:10.044 
"read": true, 00:19:10.044 "write": true, 00:19:10.044 "unmap": true, 00:19:10.044 "write_zeroes": true, 00:19:10.044 "flush": true, 00:19:10.044 "reset": true, 00:19:10.044 "compare": false, 00:19:10.044 "compare_and_write": false, 00:19:10.044 "abort": true, 00:19:10.044 "nvme_admin": false, 00:19:10.044 "nvme_io": false 00:19:10.044 }, 00:19:10.044 "memory_domains": [ 00:19:10.044 { 00:19:10.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.044 "dma_device_type": 2 00:19:10.044 } 00:19:10.044 ], 00:19:10.044 "driver_specific": {} 00:19:10.044 } 00:19:10.044 ] 00:19:10.044 10:44:36 -- common/autotest_common.sh@895 -- # return 0 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.044 10:44:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.611 10:44:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.611 "name": "Existed_Raid", 00:19:10.611 "uuid": "b3f47de9-500c-49fa-a43f-c72fbddd04ff", 00:19:10.611 "strip_size_kb": 0, 00:19:10.611 "state": "online", 00:19:10.611 "raid_level": "raid1", 00:19:10.611 "superblock": true, 00:19:10.611 "num_base_bdevs": 4, 00:19:10.611 "num_base_bdevs_discovered": 4, 00:19:10.611 "num_base_bdevs_operational": 4, 00:19:10.611 "base_bdevs_list": [ 00:19:10.611 { 00:19:10.611 "name": "BaseBdev1", 00:19:10.611 "uuid": "a1ebcd52-4009-4f68-92ba-433a82a71b13", 00:19:10.611 "is_configured": true, 00:19:10.611 "data_offset": 2048, 00:19:10.611 "data_size": 63488 00:19:10.611 }, 00:19:10.611 { 00:19:10.611 "name": "BaseBdev2", 00:19:10.611 "uuid": "512cc976-a35f-4093-8eca-e0306d8de4f1", 00:19:10.611 "is_configured": true, 00:19:10.611 "data_offset": 2048, 00:19:10.611 "data_size": 63488 00:19:10.611 }, 00:19:10.611 { 00:19:10.611 "name": "BaseBdev3", 00:19:10.611 "uuid": "85b7a114-aeb0-43ed-a653-7e60fd45f763", 00:19:10.611 "is_configured": true, 00:19:10.611 "data_offset": 2048, 00:19:10.611 "data_size": 63488 00:19:10.611 }, 00:19:10.611 { 00:19:10.611 "name": "BaseBdev4", 00:19:10.611 "uuid": "da38fcdc-3701-444a-8482-a669823c6a16", 00:19:10.611 "is_configured": true, 00:19:10.611 "data_offset": 2048, 00:19:10.611 "data_size": 63488 00:19:10.611 } 00:19:10.611 ] 00:19:10.611 }' 00:19:10.611 10:44:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.611 10:44:37 -- common/autotest_common.sh@10 -- # set +x 00:19:11.178 10:44:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:11.436 [2024-07-24 10:44:38.007455] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.436 10:44:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.694 10:44:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.694 "name": "Existed_Raid", 00:19:11.694 "uuid": "b3f47de9-500c-49fa-a43f-c72fbddd04ff", 00:19:11.694 "strip_size_kb": 0, 00:19:11.694 "state": "online", 00:19:11.694 "raid_level": "raid1", 00:19:11.694 "superblock": true, 00:19:11.694 "num_base_bdevs": 4, 00:19:11.694 "num_base_bdevs_discovered": 3, 00:19:11.694 "num_base_bdevs_operational": 3, 00:19:11.694 "base_bdevs_list": [ 00:19:11.694 { 00:19:11.694 "name": null, 00:19:11.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.694 "is_configured": false, 00:19:11.694 "data_offset": 2048, 00:19:11.694 "data_size": 63488 00:19:11.694 }, 00:19:11.694 { 00:19:11.694 "name": "BaseBdev2", 00:19:11.694 "uuid": "512cc976-a35f-4093-8eca-e0306d8de4f1", 00:19:11.694 "is_configured": true, 00:19:11.694 "data_offset": 2048, 00:19:11.694 "data_size": 63488 00:19:11.694 }, 00:19:11.694 { 00:19:11.694 "name": "BaseBdev3", 00:19:11.694 "uuid": "85b7a114-aeb0-43ed-a653-7e60fd45f763", 00:19:11.694 "is_configured": true, 00:19:11.694 "data_offset": 2048, 00:19:11.694 "data_size": 63488 00:19:11.694 }, 00:19:11.694 { 00:19:11.694 "name": "BaseBdev4", 00:19:11.694 "uuid": "da38fcdc-3701-444a-8482-a669823c6a16", 00:19:11.694 "is_configured": true, 00:19:11.694 "data_offset": 2048, 00:19:11.694 "data_size": 63488 00:19:11.694 } 00:19:11.694 ] 00:19:11.694 }' 00:19:11.694 10:44:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.694 10:44:38 -- common/autotest_common.sh@10 -- # set +x 00:19:12.630 10:44:39 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:12.630 10:44:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:12.630 10:44:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:12.630 10:44:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.630 10:44:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:12.630 10:44:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:12.630 10:44:39 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:12.888 [2024-07-24 10:44:39.529574] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:12.888 10:44:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:12.888 10:44:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:12.888 10:44:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.888 10:44:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:13.455 10:44:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:13.455 10:44:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.455 10:44:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:13.455 [2024-07-24 10:44:40.068032] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:13.455 10:44:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:13.455 10:44:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:13.455 10:44:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.455 10:44:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:13.713 10:44:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:13.713 10:44:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:13.713 10:44:40 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:13.971 [2024-07-24 10:44:40.577163] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:13.971 [2024-07-24 10:44:40.577212] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.971 [2024-07-24 10:44:40.577300] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.971 [2024-07-24 10:44:40.590986] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:13.971 [2024-07-24 10:44:40.591038] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:19:13.971 10:44:40 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:13.971 10:44:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:13.971 10:44:40 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.971 10:44:40 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:14.229 10:44:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:14.229 10:44:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:14.229 10:44:40 -- bdev/bdev_raid.sh@287 -- # killprocess 132169 00:19:14.229 10:44:40 -- common/autotest_common.sh@926 -- # '[' -z 132169 ']' 00:19:14.229 10:44:40 -- common/autotest_common.sh@930 -- # kill -0 132169 00:19:14.229 10:44:40 -- common/autotest_common.sh@931 -- # uname 00:19:14.229 10:44:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:14.229 10:44:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132169 00:19:14.229 10:44:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:14.229 10:44:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:14.229 10:44:40 -- common/autotest_common.sh@944 -- # echo 'killing process 
with pid 132169' 00:19:14.229 killing process with pid 132169 00:19:14.229 10:44:40 -- common/autotest_common.sh@945 -- # kill 132169 00:19:14.229 10:44:40 -- common/autotest_common.sh@950 -- # wait 132169 00:19:14.229 [2024-07-24 10:44:40.876111] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.229 [2024-07-24 10:44:40.876199] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.487 10:44:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:14.487 00:19:14.487 real 0m15.923s 00:19:14.487 user 0m29.490s 00:19:14.487 sys 0m2.051s 00:19:14.487 10:44:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.487 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:19:14.487 ************************************ 00:19:14.487 END TEST raid_state_function_test_sb 00:19:14.487 ************************************ 00:19:14.487 10:44:41 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:14.487 10:44:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:14.487 10:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:14.487 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:19:14.747 ************************************ 00:19:14.747 START TEST raid_superblock_test 00:19:14.747 ************************************ 00:19:14.747 10:44:41 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@357 -- # raid_pid=132631 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:14.747 10:44:41 -- bdev/bdev_raid.sh@358 -- # waitforlisten 132631 /var/tmp/spdk-raid.sock 00:19:14.747 10:44:41 -- common/autotest_common.sh@819 -- # '[' -z 132631 ']' 00:19:14.747 10:44:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:14.747 10:44:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:14.747 10:44:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:14.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
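For reference, the launch sequence traced above follows this pattern (a condensed sketch assembled from the commands in the log; $rootdir stands for /home/vagrant/spdk_repo/spdk and the real script's trap/cleanup handling is omitted):

    # start a bare bdev_svc app on a private RPC socket, with bdev_raid debug logging enabled
    $rootdir/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # block until the app is up and accepting RPCs on that socket
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock

Every rpc.py call that follows points at the same socket with -s /var/tmp/spdk-raid.sock, so it talks to this test instance rather than to a system-wide SPDK app.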
00:19:14.747 10:44:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:14.747 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:19:14.747 [2024-07-24 10:44:41.236654] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:14.747 [2024-07-24 10:44:41.236861] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132631 ] 00:19:14.747 [2024-07-24 10:44:41.374276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.009 [2024-07-24 10:44:41.468714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.009 [2024-07-24 10:44:41.522703] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:15.575 10:44:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:15.575 10:44:42 -- common/autotest_common.sh@852 -- # return 0 00:19:15.575 10:44:42 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:15.576 10:44:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:15.833 malloc1 00:19:15.833 10:44:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:16.091 [2024-07-24 10:44:42.709808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:16.091 [2024-07-24 10:44:42.709972] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.091 [2024-07-24 10:44:42.710030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:19:16.091 [2024-07-24 10:44:42.710106] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.091 [2024-07-24 10:44:42.713110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.091 [2024-07-24 10:44:42.713172] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:16.091 pt1 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:16.091 10:44:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:16.348 malloc2 00:19:16.348 10:44:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:16.606 [2024-07-24 10:44:43.238023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:16.606 [2024-07-24 10:44:43.238127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.606 [2024-07-24 10:44:43.238174] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:19:16.606 [2024-07-24 10:44:43.238225] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.606 [2024-07-24 10:44:43.241033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.606 [2024-07-24 10:44:43.241090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:16.606 pt2 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:16.606 10:44:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:17.171 malloc3 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:17.171 [2024-07-24 10:44:43.809157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:17.171 [2024-07-24 10:44:43.809265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.171 [2024-07-24 10:44:43.809313] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:17.171 [2024-07-24 10:44:43.809363] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.171 [2024-07-24 10:44:43.811978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.171 [2024-07-24 10:44:43.812034] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:17.171 pt3 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.171 10:44:43 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:17.428 malloc4 00:19:17.428 10:44:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:17.686 [2024-07-24 10:44:44.328571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:17.686 [2024-07-24 10:44:44.328688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.686 [2024-07-24 10:44:44.328731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:17.686 [2024-07-24 10:44:44.328788] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.686 [2024-07-24 10:44:44.331426] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.686 [2024-07-24 10:44:44.331493] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:17.686 pt4 00:19:17.686 10:44:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:17.686 10:44:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:17.686 10:44:44 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:17.944 [2024-07-24 10:44:44.556748] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:17.944 [2024-07-24 10:44:44.559138] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:17.944 [2024-07-24 10:44:44.559230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:17.944 [2024-07-24 10:44:44.559290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:17.944 [2024-07-24 10:44:44.559605] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:19:17.944 [2024-07-24 10:44:44.559643] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:17.944 [2024-07-24 10:44:44.559811] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:19:17.944 [2024-07-24 10:44:44.560373] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:19:17.944 [2024-07-24 10:44:44.560400] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:19:17.944 [2024-07-24 10:44:44.560616] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
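The setup traced above builds each base bdev as a malloc bdev wrapped in a passthru bdev with a fixed UUID, then assembles the four of them into a RAID1 bdev that carries an on-disk superblock. A condensed sketch of that sequence, with rpc_py used as shorthand for scripts/rpc.py -s /var/tmp/spdk-raid.sock:

    for i in 1 2 3 4; do
        # 32 MB backing store with 512-byte blocks
        $rpc_py bdev_malloc_create 32 512 -b malloc$i
        # the passthru layer gives the base bdev a stable name (ptN) and UUID
        $rpc_py bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # -s writes a raid superblock to the base bdevs so the array can be reassembled later
    $rpc_py bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s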
00:19:17.944 10:44:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.202 10:44:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:18.202 "name": "raid_bdev1", 00:19:18.202 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:18.202 "strip_size_kb": 0, 00:19:18.202 "state": "online", 00:19:18.202 "raid_level": "raid1", 00:19:18.202 "superblock": true, 00:19:18.202 "num_base_bdevs": 4, 00:19:18.202 "num_base_bdevs_discovered": 4, 00:19:18.202 "num_base_bdevs_operational": 4, 00:19:18.202 "base_bdevs_list": [ 00:19:18.202 { 00:19:18.202 "name": "pt1", 00:19:18.202 "uuid": "5b656f53-5eb5-5580-97c6-5bda36924ad6", 00:19:18.202 "is_configured": true, 00:19:18.202 "data_offset": 2048, 00:19:18.202 "data_size": 63488 00:19:18.202 }, 00:19:18.202 { 00:19:18.202 "name": "pt2", 00:19:18.202 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:18.202 "is_configured": true, 00:19:18.202 "data_offset": 2048, 00:19:18.202 "data_size": 63488 00:19:18.202 }, 00:19:18.202 { 00:19:18.202 "name": "pt3", 00:19:18.202 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:18.202 "is_configured": true, 00:19:18.202 "data_offset": 2048, 00:19:18.202 "data_size": 63488 00:19:18.202 }, 00:19:18.202 { 00:19:18.202 "name": "pt4", 00:19:18.202 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:18.203 "is_configured": true, 00:19:18.203 "data_offset": 2048, 00:19:18.203 "data_size": 63488 00:19:18.203 } 00:19:18.203 ] 00:19:18.203 }' 00:19:18.203 10:44:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:18.203 10:44:44 -- common/autotest_common.sh@10 -- # set +x 00:19:19.137 10:44:45 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:19.137 10:44:45 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:19.137 [2024-07-24 10:44:45.741261] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.137 10:44:45 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=eceeb304-d251-4e36-ba16-02f783943734 00:19:19.137 10:44:45 -- bdev/bdev_raid.sh@380 -- # '[' -z eceeb304-d251-4e36-ba16-02f783943734 ']' 00:19:19.137 10:44:45 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:19.394 [2024-07-24 10:44:45.984985] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.395 [2024-07-24 10:44:45.985038] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.395 [2024-07-24 10:44:45.985161] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.395 [2024-07-24 10:44:45.985278] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.395 [2024-07-24 10:44:45.985293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:19:19.395 10:44:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.395 10:44:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:19.652 10:44:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:19.652 10:44:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:19.652 10:44:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.652 10:44:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
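The verify_raid_bdev_state call traced just above reduces to pulling the named entry out of bdev_raid_get_bdevs and comparing a few of its JSON fields. A simplified sketch of that check (field names are taken from the JSON in the trace; the real helper also receives the expected raid_level, strip size and operational count as arguments):

    raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r '.state' <<< "$raid_bdev_info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
    # expectation for the fully assembled four-way array created above
    [ "$state" = online ] && [ "$discovered" -eq 4 ]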
00:19:19.910 10:44:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.910 10:44:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:20.167 10:44:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.167 10:44:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:20.424 10:44:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:20.424 10:44:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:20.683 10:44:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:20.683 10:44:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:20.941 10:44:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:20.941 10:44:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:20.941 10:44:47 -- common/autotest_common.sh@640 -- # local es=0 00:19:20.941 10:44:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:20.941 10:44:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.941 10:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:20.941 10:44:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.941 10:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:20.941 10:44:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.941 10:44:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:20.941 10:44:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.941 10:44:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:20.941 10:44:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:21.198 [2024-07-24 10:44:47.661405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:21.198 [2024-07-24 10:44:47.664103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:21.198 [2024-07-24 10:44:47.664187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:21.198 [2024-07-24 10:44:47.664253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:21.198 [2024-07-24 10:44:47.664351] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:21.198 [2024-07-24 10:44:47.664529] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:21.198 [2024-07-24 10:44:47.664580] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:21.198 [2024-07-24 10:44:47.664654] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:21.198 [2024-07-24 10:44:47.664717] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.198 [2024-07-24 10:44:47.664732] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:19:21.198 request: 00:19:21.198 { 00:19:21.198 "name": "raid_bdev1", 00:19:21.198 "raid_level": "raid1", 00:19:21.198 "base_bdevs": [ 00:19:21.198 "malloc1", 00:19:21.198 "malloc2", 00:19:21.198 "malloc3", 00:19:21.198 "malloc4" 00:19:21.198 ], 00:19:21.198 "superblock": false, 00:19:21.198 "method": "bdev_raid_create", 00:19:21.198 "req_id": 1 00:19:21.198 } 00:19:21.198 Got JSON-RPC error response 00:19:21.198 response: 00:19:21.198 { 00:19:21.198 "code": -17, 00:19:21.198 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:21.198 } 00:19:21.198 10:44:47 -- common/autotest_common.sh@643 -- # es=1 00:19:21.198 10:44:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:21.198 10:44:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:21.198 10:44:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:21.198 10:44:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:21.198 10:44:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.456 10:44:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:21.456 10:44:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:21.456 10:44:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:21.456 [2024-07-24 10:44:48.129415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:21.456 [2024-07-24 10:44:48.129580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.456 [2024-07-24 10:44:48.129633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:21.456 [2024-07-24 10:44:48.129687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.456 [2024-07-24 10:44:48.132483] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.456 [2024-07-24 10:44:48.132568] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:21.456 [2024-07-24 10:44:48.132713] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:21.456 [2024-07-24 10:44:48.132813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:21.456 pt1 00:19:21.713 10:44:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:21.713 10:44:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.714 10:44:48 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.714 "name": "raid_bdev1", 00:19:21.714 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:21.714 "strip_size_kb": 0, 00:19:21.714 "state": "configuring", 00:19:21.714 "raid_level": "raid1", 00:19:21.714 "superblock": true, 00:19:21.714 "num_base_bdevs": 4, 00:19:21.714 "num_base_bdevs_discovered": 1, 00:19:21.714 "num_base_bdevs_operational": 4, 00:19:21.714 "base_bdevs_list": [ 00:19:21.714 { 00:19:21.714 "name": "pt1", 00:19:21.714 "uuid": "5b656f53-5eb5-5580-97c6-5bda36924ad6", 00:19:21.714 "is_configured": true, 00:19:21.714 "data_offset": 2048, 00:19:21.714 "data_size": 63488 00:19:21.714 }, 00:19:21.714 { 00:19:21.714 "name": null, 00:19:21.714 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:21.714 "is_configured": false, 00:19:21.714 "data_offset": 2048, 00:19:21.714 "data_size": 63488 00:19:21.714 }, 00:19:21.714 { 00:19:21.714 "name": null, 00:19:21.714 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:21.714 "is_configured": false, 00:19:21.714 "data_offset": 2048, 00:19:21.714 "data_size": 63488 00:19:21.714 }, 00:19:21.714 { 00:19:21.714 "name": null, 00:19:21.714 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:21.714 "is_configured": false, 00:19:21.714 "data_offset": 2048, 00:19:21.714 "data_size": 63488 00:19:21.714 } 00:19:21.714 ] 00:19:21.714 }' 00:19:21.714 10:44:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.714 10:44:48 -- common/autotest_common.sh@10 -- # set +x 00:19:22.646 10:44:48 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:22.646 10:44:48 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:22.646 [2024-07-24 10:44:49.245690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:22.646 [2024-07-24 10:44:49.245881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.646 [2024-07-24 10:44:49.245945] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:22.646 [2024-07-24 10:44:49.245977] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.646 [2024-07-24 10:44:49.246561] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.646 [2024-07-24 10:44:49.246636] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:22.646 [2024-07-24 10:44:49.246761] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:22.646 [2024-07-24 10:44:49.246794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:22.646 pt2 00:19:22.646 10:44:49 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:22.904 [2024-07-24 10:44:49.513794] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.904 10:44:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.162 10:44:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.162 "name": "raid_bdev1", 00:19:23.162 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:23.162 "strip_size_kb": 0, 00:19:23.162 "state": "configuring", 00:19:23.162 "raid_level": "raid1", 00:19:23.162 "superblock": true, 00:19:23.162 "num_base_bdevs": 4, 00:19:23.162 "num_base_bdevs_discovered": 1, 00:19:23.162 "num_base_bdevs_operational": 4, 00:19:23.162 "base_bdevs_list": [ 00:19:23.162 { 00:19:23.162 "name": "pt1", 00:19:23.162 "uuid": "5b656f53-5eb5-5580-97c6-5bda36924ad6", 00:19:23.162 "is_configured": true, 00:19:23.162 "data_offset": 2048, 00:19:23.162 "data_size": 63488 00:19:23.162 }, 00:19:23.162 { 00:19:23.162 "name": null, 00:19:23.162 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:23.162 "is_configured": false, 00:19:23.162 "data_offset": 2048, 00:19:23.162 "data_size": 63488 00:19:23.162 }, 00:19:23.162 { 00:19:23.162 "name": null, 00:19:23.162 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:23.162 "is_configured": false, 00:19:23.162 "data_offset": 2048, 00:19:23.162 "data_size": 63488 00:19:23.162 }, 00:19:23.162 { 00:19:23.162 "name": null, 00:19:23.162 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:23.162 "is_configured": false, 00:19:23.162 "data_offset": 2048, 00:19:23.162 "data_size": 63488 00:19:23.162 } 00:19:23.162 ] 00:19:23.162 }' 00:19:23.162 10:44:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.162 10:44:49 -- common/autotest_common.sh@10 -- # set +x 00:19:24.096 10:44:50 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:24.096 10:44:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:24.096 10:44:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:24.096 [2024-07-24 10:44:50.677965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:24.096 [2024-07-24 10:44:50.678097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.096 [2024-07-24 10:44:50.678153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:24.096 [2024-07-24 10:44:50.678185] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.096 [2024-07-24 10:44:50.678765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.096 [2024-07-24 10:44:50.678835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:24.096 [2024-07-24 10:44:50.678944] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:24.096 [2024-07-24 
10:44:50.678976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:24.096 pt2 00:19:24.096 10:44:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:24.096 10:44:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:24.096 10:44:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:24.354 [2024-07-24 10:44:50.910080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:24.354 [2024-07-24 10:44:50.910230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.354 [2024-07-24 10:44:50.910281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:24.354 [2024-07-24 10:44:50.910323] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.354 [2024-07-24 10:44:50.910961] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.354 [2024-07-24 10:44:50.911051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:24.354 [2024-07-24 10:44:50.911186] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:24.354 [2024-07-24 10:44:50.911218] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:24.354 pt3 00:19:24.354 10:44:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:24.354 10:44:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:24.354 10:44:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:24.613 [2024-07-24 10:44:51.174126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:24.613 [2024-07-24 10:44:51.174296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.613 [2024-07-24 10:44:51.174349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:24.613 [2024-07-24 10:44:51.174387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.613 [2024-07-24 10:44:51.174949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.613 [2024-07-24 10:44:51.175019] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:24.613 [2024-07-24 10:44:51.175126] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:24.613 [2024-07-24 10:44:51.175158] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:24.613 [2024-07-24 10:44:51.175365] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:24.613 [2024-07-24 10:44:51.175382] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:24.613 [2024-07-24 10:44:51.175482] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:19:24.613 [2024-07-24 10:44:51.175927] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:24.613 [2024-07-24 10:44:51.175955] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:24.613 [2024-07-24 10:44:51.176077] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:24.613 pt4 
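Note the "raid superblock found on bdev ptN" messages above: because the array was created with -s, tearing down the passthru bdevs and recreating them is enough for the examine path to re-claim each one, and raid_bdev1 comes back online on its own once the last member is claimed. A sketch of that reassembly step, assuming the malloc bdevs still hold the superblock written earlier:

    for i in 2 3 4; do
        # recreating the passthru bdev re-exposes the superblock; bdev_raid examines it
        # and re-claims the bdev into raid_bdev1 automatically
        $rpc_py bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
    done
    # no bdev_raid_create here - the raid bdev reappears once enough base bdevs are claimed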
00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.613 10:44:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.871 10:44:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.871 "name": "raid_bdev1", 00:19:24.871 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:24.871 "strip_size_kb": 0, 00:19:24.871 "state": "online", 00:19:24.871 "raid_level": "raid1", 00:19:24.871 "superblock": true, 00:19:24.871 "num_base_bdevs": 4, 00:19:24.871 "num_base_bdevs_discovered": 4, 00:19:24.871 "num_base_bdevs_operational": 4, 00:19:24.871 "base_bdevs_list": [ 00:19:24.871 { 00:19:24.871 "name": "pt1", 00:19:24.871 "uuid": "5b656f53-5eb5-5580-97c6-5bda36924ad6", 00:19:24.871 "is_configured": true, 00:19:24.871 "data_offset": 2048, 00:19:24.871 "data_size": 63488 00:19:24.871 }, 00:19:24.871 { 00:19:24.871 "name": "pt2", 00:19:24.871 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:24.871 "is_configured": true, 00:19:24.871 "data_offset": 2048, 00:19:24.871 "data_size": 63488 00:19:24.871 }, 00:19:24.871 { 00:19:24.871 "name": "pt3", 00:19:24.871 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:24.871 "is_configured": true, 00:19:24.871 "data_offset": 2048, 00:19:24.871 "data_size": 63488 00:19:24.871 }, 00:19:24.871 { 00:19:24.871 "name": "pt4", 00:19:24.871 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:24.871 "is_configured": true, 00:19:24.871 "data_offset": 2048, 00:19:24.871 "data_size": 63488 00:19:24.871 } 00:19:24.871 ] 00:19:24.871 }' 00:19:24.871 10:44:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.871 10:44:51 -- common/autotest_common.sh@10 -- # set +x 00:19:25.806 10:44:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:25.806 10:44:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:25.806 [2024-07-24 10:44:52.378759] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:25.806 10:44:52 -- bdev/bdev_raid.sh@430 -- # '[' eceeb304-d251-4e36-ba16-02f783943734 '!=' eceeb304-d251-4e36-ba16-02f783943734 ']' 00:19:25.806 10:44:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:19:25.806 10:44:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:25.806 10:44:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:25.806 10:44:52 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:26.064 [2024-07-24 10:44:52.614531] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.064 10:44:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.322 10:44:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.322 "name": "raid_bdev1", 00:19:26.322 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:26.322 "strip_size_kb": 0, 00:19:26.322 "state": "online", 00:19:26.322 "raid_level": "raid1", 00:19:26.322 "superblock": true, 00:19:26.322 "num_base_bdevs": 4, 00:19:26.322 "num_base_bdevs_discovered": 3, 00:19:26.322 "num_base_bdevs_operational": 3, 00:19:26.322 "base_bdevs_list": [ 00:19:26.322 { 00:19:26.322 "name": null, 00:19:26.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.322 "is_configured": false, 00:19:26.322 "data_offset": 2048, 00:19:26.322 "data_size": 63488 00:19:26.322 }, 00:19:26.322 { 00:19:26.322 "name": "pt2", 00:19:26.322 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:26.322 "is_configured": true, 00:19:26.322 "data_offset": 2048, 00:19:26.322 "data_size": 63488 00:19:26.322 }, 00:19:26.322 { 00:19:26.322 "name": "pt3", 00:19:26.322 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:26.322 "is_configured": true, 00:19:26.322 "data_offset": 2048, 00:19:26.322 "data_size": 63488 00:19:26.323 }, 00:19:26.323 { 00:19:26.323 "name": "pt4", 00:19:26.323 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:26.323 "is_configured": true, 00:19:26.323 "data_offset": 2048, 00:19:26.323 "data_size": 63488 00:19:26.323 } 00:19:26.323 ] 00:19:26.323 }' 00:19:26.323 10:44:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.323 10:44:52 -- common/autotest_common.sh@10 -- # set +x 00:19:27.256 10:44:53 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:27.256 [2024-07-24 10:44:53.896484] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.256 [2024-07-24 10:44:53.896558] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.256 [2024-07-24 10:44:53.896680] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.256 [2024-07-24 10:44:53.896802] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:27.256 [2024-07-24 10:44:53.896817] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:27.256 10:44:53 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:19:27.256 10:44:53 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:19:27.520 10:44:54 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:19:27.520 10:44:54 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:19:27.520 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:19:27.520 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:27.520 10:44:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:27.798 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:27.798 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:27.798 10:44:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:28.056 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:28.056 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:28.056 10:44:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:28.314 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:19:28.314 10:44:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:19:28.314 10:44:54 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:19:28.314 10:44:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:28.314 10:44:54 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:28.573 [2024-07-24 10:44:55.121043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:28.573 [2024-07-24 10:44:55.121184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.573 [2024-07-24 10:44:55.121226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:28.573 [2024-07-24 10:44:55.121264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.573 [2024-07-24 10:44:55.123885] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.573 [2024-07-24 10:44:55.123964] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:28.573 [2024-07-24 10:44:55.124079] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:28.573 [2024-07-24 10:44:55.124128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:28.573 pt2 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.573 10:44:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.831 10:44:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.831 "name": "raid_bdev1", 00:19:28.831 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:28.831 "strip_size_kb": 0, 00:19:28.831 "state": "configuring", 00:19:28.831 "raid_level": "raid1", 00:19:28.831 "superblock": true, 00:19:28.831 "num_base_bdevs": 4, 00:19:28.831 "num_base_bdevs_discovered": 1, 00:19:28.831 "num_base_bdevs_operational": 3, 00:19:28.831 "base_bdevs_list": [ 00:19:28.831 { 00:19:28.831 "name": null, 00:19:28.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.831 "is_configured": false, 00:19:28.831 "data_offset": 2048, 00:19:28.831 "data_size": 63488 00:19:28.831 }, 00:19:28.831 { 00:19:28.831 "name": "pt2", 00:19:28.831 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:28.831 "is_configured": true, 00:19:28.831 "data_offset": 2048, 00:19:28.831 "data_size": 63488 00:19:28.831 }, 00:19:28.831 { 00:19:28.831 "name": null, 00:19:28.831 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:28.831 "is_configured": false, 00:19:28.831 "data_offset": 2048, 00:19:28.831 "data_size": 63488 00:19:28.831 }, 00:19:28.831 { 00:19:28.831 "name": null, 00:19:28.831 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:28.831 "is_configured": false, 00:19:28.831 "data_offset": 2048, 00:19:28.831 "data_size": 63488 00:19:28.831 } 00:19:28.831 ] 00:19:28.831 }' 00:19:28.831 10:44:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.831 10:44:55 -- common/autotest_common.sh@10 -- # set +x 00:19:29.396 10:44:56 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:29.396 10:44:56 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:29.396 10:44:56 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:29.654 [2024-07-24 10:44:56.293316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:29.654 [2024-07-24 10:44:56.293446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.654 [2024-07-24 10:44:56.293498] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:29.654 [2024-07-24 10:44:56.293525] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.654 [2024-07-24 10:44:56.294087] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.654 [2024-07-24 10:44:56.294148] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:29.654 [2024-07-24 10:44:56.294248] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:29.654 [2024-07-24 10:44:56.294283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:29.654 pt3 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.654 10:44:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.912 10:44:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.912 "name": "raid_bdev1", 00:19:29.912 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:29.912 "strip_size_kb": 0, 00:19:29.912 "state": "configuring", 00:19:29.912 "raid_level": "raid1", 00:19:29.912 "superblock": true, 00:19:29.912 "num_base_bdevs": 4, 00:19:29.912 "num_base_bdevs_discovered": 2, 00:19:29.912 "num_base_bdevs_operational": 3, 00:19:29.912 "base_bdevs_list": [ 00:19:29.912 { 00:19:29.912 "name": null, 00:19:29.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.912 "is_configured": false, 00:19:29.912 "data_offset": 2048, 00:19:29.912 "data_size": 63488 00:19:29.912 }, 00:19:29.912 { 00:19:29.912 "name": "pt2", 00:19:29.912 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:29.912 "is_configured": true, 00:19:29.912 "data_offset": 2048, 00:19:29.912 "data_size": 63488 00:19:29.912 }, 00:19:29.912 { 00:19:29.912 "name": "pt3", 00:19:29.912 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:29.912 "is_configured": true, 00:19:29.912 "data_offset": 2048, 00:19:29.912 "data_size": 63488 00:19:29.912 }, 00:19:29.912 { 00:19:29.912 "name": null, 00:19:29.912 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:29.912 "is_configured": false, 00:19:29.912 "data_offset": 2048, 00:19:29.912 "data_size": 63488 00:19:29.912 } 00:19:29.912 ] 00:19:29.912 }' 00:19:29.912 10:44:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.912 10:44:56 -- common/autotest_common.sh@10 -- # set +x 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:30.846 [2024-07-24 10:44:57.477558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:30.846 [2024-07-24 10:44:57.477935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.846 [2024-07-24 10:44:57.478123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:30.846 [2024-07-24 10:44:57.478290] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.846 [2024-07-24 10:44:57.479013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.846 [2024-07-24 10:44:57.479211] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:30.846 [2024-07-24 10:44:57.479455] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:30.846 [2024-07-24 10:44:57.479629] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:30.846 [2024-07-24 10:44:57.479901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:19:30.846 [2024-07-24 10:44:57.480038] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
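At this point the array reassembles with only three members: pt1 was removed from the online array earlier, so the superblock now describes a three-way RAID1 and the first slot stays null. One way to watch that from the shell (same rpc_py shorthand as above; the "online 3/3" expectation comes from the JSON that follows in the trace):

    $rpc_py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # prints "online 3/3" once pt4 has been claimed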
00:19:30.846 [2024-07-24 10:44:57.480257] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:19:30.846 [2024-07-24 10:44:57.480746] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:19:30.846 [2024-07-24 10:44:57.480879] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:19:30.846 [2024-07-24 10:44:57.481167] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.846 pt4 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.846 10:44:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.423 10:44:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.423 "name": "raid_bdev1", 00:19:31.423 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:31.423 "strip_size_kb": 0, 00:19:31.423 "state": "online", 00:19:31.423 "raid_level": "raid1", 00:19:31.423 "superblock": true, 00:19:31.423 "num_base_bdevs": 4, 00:19:31.423 "num_base_bdevs_discovered": 3, 00:19:31.424 "num_base_bdevs_operational": 3, 00:19:31.424 "base_bdevs_list": [ 00:19:31.424 { 00:19:31.424 "name": null, 00:19:31.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.424 "is_configured": false, 00:19:31.424 "data_offset": 2048, 00:19:31.424 "data_size": 63488 00:19:31.424 }, 00:19:31.424 { 00:19:31.424 "name": "pt2", 00:19:31.424 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:31.424 "is_configured": true, 00:19:31.424 "data_offset": 2048, 00:19:31.424 "data_size": 63488 00:19:31.424 }, 00:19:31.424 { 00:19:31.424 "name": "pt3", 00:19:31.424 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:31.424 "is_configured": true, 00:19:31.424 "data_offset": 2048, 00:19:31.424 "data_size": 63488 00:19:31.424 }, 00:19:31.424 { 00:19:31.424 "name": "pt4", 00:19:31.424 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:31.424 "is_configured": true, 00:19:31.424 "data_offset": 2048, 00:19:31.424 "data_size": 63488 00:19:31.424 } 00:19:31.424 ] 00:19:31.424 }' 00:19:31.424 10:44:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.424 10:44:57 -- common/autotest_common.sh@10 -- # set +x 00:19:32.018 10:44:58 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:32.018 10:44:58 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:32.276 [2024-07-24 10:44:58.797781] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:32.276 [2024-07-24 10:44:58.798034] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:19:32.276 [2024-07-24 10:44:58.798257] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.276 [2024-07-24 10:44:58.798484] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.276 [2024-07-24 10:44:58.798611] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:19:32.276 10:44:58 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.276 10:44:58 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:32.534 10:44:59 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:32.534 10:44:59 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:32.534 10:44:59 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:32.792 [2024-07-24 10:44:59.429869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:32.792 [2024-07-24 10:44:59.430197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.792 [2024-07-24 10:44:59.430381] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:32.792 [2024-07-24 10:44:59.430525] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.792 [2024-07-24 10:44:59.433249] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.792 [2024-07-24 10:44:59.433461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:32.792 [2024-07-24 10:44:59.433697] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:32.792 [2024-07-24 10:44:59.433860] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:32.792 pt1 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.792 10:44:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.050 10:44:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.050 "name": "raid_bdev1", 00:19:33.050 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:33.050 "strip_size_kb": 0, 00:19:33.050 "state": "configuring", 00:19:33.050 "raid_level": "raid1", 00:19:33.050 "superblock": true, 00:19:33.050 "num_base_bdevs": 4, 00:19:33.050 "num_base_bdevs_discovered": 1, 00:19:33.050 "num_base_bdevs_operational": 4, 00:19:33.050 "base_bdevs_list": [ 00:19:33.050 { 00:19:33.050 "name": "pt1", 00:19:33.050 "uuid": 
"5b656f53-5eb5-5580-97c6-5bda36924ad6", 00:19:33.050 "is_configured": true, 00:19:33.050 "data_offset": 2048, 00:19:33.050 "data_size": 63488 00:19:33.050 }, 00:19:33.050 { 00:19:33.050 "name": null, 00:19:33.050 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:33.050 "is_configured": false, 00:19:33.050 "data_offset": 2048, 00:19:33.050 "data_size": 63488 00:19:33.050 }, 00:19:33.050 { 00:19:33.050 "name": null, 00:19:33.050 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:33.050 "is_configured": false, 00:19:33.050 "data_offset": 2048, 00:19:33.050 "data_size": 63488 00:19:33.050 }, 00:19:33.050 { 00:19:33.050 "name": null, 00:19:33.050 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:33.050 "is_configured": false, 00:19:33.050 "data_offset": 2048, 00:19:33.050 "data_size": 63488 00:19:33.050 } 00:19:33.050 ] 00:19:33.050 }' 00:19:33.050 10:44:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.050 10:44:59 -- common/autotest_common.sh@10 -- # set +x 00:19:33.984 10:45:00 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:19:33.984 10:45:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:33.984 10:45:00 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:33.984 10:45:00 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:33.984 10:45:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:33.984 10:45:00 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:34.242 10:45:00 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:34.242 10:45:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:34.243 10:45:00 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:34.501 10:45:01 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:34.501 10:45:01 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:34.501 10:45:01 -- bdev/bdev_raid.sh@489 -- # i=3 00:19:34.501 10:45:01 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:34.760 [2024-07-24 10:45:01.352305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:34.760 [2024-07-24 10:45:01.352739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.760 [2024-07-24 10:45:01.353002] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:19:34.760 [2024-07-24 10:45:01.353235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.760 [2024-07-24 10:45:01.353970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.760 [2024-07-24 10:45:01.354229] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:34.760 [2024-07-24 10:45:01.354523] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:34.760 [2024-07-24 10:45:01.354719] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:34.760 [2024-07-24 10:45:01.354902] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.760 [2024-07-24 10:45:01.355123] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 
00:19:34.760 [2024-07-24 10:45:01.355392] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:34.760 pt4 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.760 10:45:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.018 10:45:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:35.018 "name": "raid_bdev1", 00:19:35.018 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:35.018 "strip_size_kb": 0, 00:19:35.018 "state": "configuring", 00:19:35.018 "raid_level": "raid1", 00:19:35.018 "superblock": true, 00:19:35.018 "num_base_bdevs": 4, 00:19:35.018 "num_base_bdevs_discovered": 1, 00:19:35.018 "num_base_bdevs_operational": 3, 00:19:35.018 "base_bdevs_list": [ 00:19:35.018 { 00:19:35.018 "name": null, 00:19:35.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.018 "is_configured": false, 00:19:35.018 "data_offset": 2048, 00:19:35.018 "data_size": 63488 00:19:35.018 }, 00:19:35.018 { 00:19:35.018 "name": null, 00:19:35.018 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:35.018 "is_configured": false, 00:19:35.018 "data_offset": 2048, 00:19:35.018 "data_size": 63488 00:19:35.018 }, 00:19:35.018 { 00:19:35.018 "name": null, 00:19:35.018 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:35.018 "is_configured": false, 00:19:35.018 "data_offset": 2048, 00:19:35.018 "data_size": 63488 00:19:35.018 }, 00:19:35.018 { 00:19:35.018 "name": "pt4", 00:19:35.018 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:35.018 "is_configured": true, 00:19:35.018 "data_offset": 2048, 00:19:35.018 "data_size": 63488 00:19:35.018 } 00:19:35.018 ] 00:19:35.018 }' 00:19:35.018 10:45:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:35.018 10:45:01 -- common/autotest_common.sh@10 -- # set +x 00:19:35.584 10:45:02 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:19:35.584 10:45:02 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:35.584 10:45:02 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.842 [2024-07-24 10:45:02.448642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.842 [2024-07-24 10:45:02.449298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.842 [2024-07-24 10:45:02.449552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:35.842 [2024-07-24 10:45:02.449778] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.842 [2024-07-24 
10:45:02.450549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.842 [2024-07-24 10:45:02.450807] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.842 [2024-07-24 10:45:02.451105] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:35.842 [2024-07-24 10:45:02.451316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.842 pt2 00:19:35.842 10:45:02 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:35.842 10:45:02 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:35.842 10:45:02 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:36.100 [2024-07-24 10:45:02.692664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:36.100 [2024-07-24 10:45:02.693312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.100 [2024-07-24 10:45:02.693591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:36.100 [2024-07-24 10:45:02.693831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.100 [2024-07-24 10:45:02.694651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.100 [2024-07-24 10:45:02.694924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:36.100 [2024-07-24 10:45:02.695242] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:36.100 [2024-07-24 10:45:02.695469] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:36.100 [2024-07-24 10:45:02.695982] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:19:36.100 [2024-07-24 10:45:02.696199] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:36.100 [2024-07-24 10:45:02.696536] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:19:36.100 [2024-07-24 10:45:02.697198] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:19:36.100 [2024-07-24 10:45:02.697413] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:19:36.100 [2024-07-24 10:45:02.697831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.100 pt3 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.100 10:45:02 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.100 10:45:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.359 10:45:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.359 "name": "raid_bdev1", 00:19:36.359 "uuid": "eceeb304-d251-4e36-ba16-02f783943734", 00:19:36.359 "strip_size_kb": 0, 00:19:36.359 "state": "online", 00:19:36.359 "raid_level": "raid1", 00:19:36.359 "superblock": true, 00:19:36.359 "num_base_bdevs": 4, 00:19:36.359 "num_base_bdevs_discovered": 3, 00:19:36.359 "num_base_bdevs_operational": 3, 00:19:36.359 "base_bdevs_list": [ 00:19:36.359 { 00:19:36.359 "name": null, 00:19:36.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.359 "is_configured": false, 00:19:36.359 "data_offset": 2048, 00:19:36.359 "data_size": 63488 00:19:36.359 }, 00:19:36.359 { 00:19:36.359 "name": "pt2", 00:19:36.359 "uuid": "edb7eec5-1514-5972-b00c-9eaa2553d9b3", 00:19:36.359 "is_configured": true, 00:19:36.359 "data_offset": 2048, 00:19:36.359 "data_size": 63488 00:19:36.359 }, 00:19:36.359 { 00:19:36.359 "name": "pt3", 00:19:36.359 "uuid": "e464adbd-2982-5434-945d-201ece2a57b0", 00:19:36.359 "is_configured": true, 00:19:36.359 "data_offset": 2048, 00:19:36.359 "data_size": 63488 00:19:36.359 }, 00:19:36.359 { 00:19:36.359 "name": "pt4", 00:19:36.359 "uuid": "c01e2079-569d-5854-9d59-3fc327b8c003", 00:19:36.359 "is_configured": true, 00:19:36.359 "data_offset": 2048, 00:19:36.359 "data_size": 63488 00:19:36.359 } 00:19:36.359 ] 00:19:36.359 }' 00:19:36.359 10:45:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.359 10:45:02 -- common/autotest_common.sh@10 -- # set +x 00:19:36.925 10:45:03 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:36.925 10:45:03 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:37.184 [2024-07-24 10:45:03.826351] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.184 10:45:03 -- bdev/bdev_raid.sh@506 -- # '[' eceeb304-d251-4e36-ba16-02f783943734 '!=' eceeb304-d251-4e36-ba16-02f783943734 ']' 00:19:37.184 10:45:03 -- bdev/bdev_raid.sh@511 -- # killprocess 132631 00:19:37.184 10:45:03 -- common/autotest_common.sh@926 -- # '[' -z 132631 ']' 00:19:37.184 10:45:03 -- common/autotest_common.sh@930 -- # kill -0 132631 00:19:37.184 10:45:03 -- common/autotest_common.sh@931 -- # uname 00:19:37.184 10:45:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.184 10:45:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132631 00:19:37.184 10:45:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:37.184 10:45:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:37.184 10:45:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132631' 00:19:37.184 killing process with pid 132631 00:19:37.184 10:45:03 -- common/autotest_common.sh@945 -- # kill 132631 00:19:37.184 10:45:03 -- common/autotest_common.sh@950 -- # wait 132631 00:19:37.184 [2024-07-24 10:45:03.870851] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.442 [2024-07-24 10:45:03.871349] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.442 [2024-07-24 10:45:03.871691] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.442 [2024-07-24 
10:45:03.871919] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:19:37.442 [2024-07-24 10:45:03.934272] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:37.700 00:19:37.700 real 0m23.093s 00:19:37.700 user 0m43.139s 00:19:37.700 sys 0m2.916s 00:19:37.700 10:45:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:37.700 10:45:04 -- common/autotest_common.sh@10 -- # set +x 00:19:37.700 ************************************ 00:19:37.700 END TEST raid_superblock_test 00:19:37.700 ************************************ 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:37.700 10:45:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:19:37.700 10:45:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:37.700 10:45:04 -- common/autotest_common.sh@10 -- # set +x 00:19:37.700 ************************************ 00:19:37.700 START TEST raid_rebuild_test 00:19:37.700 ************************************ 00:19:37.700 10:45:04 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=133316 00:19:37.700 10:45:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:37.701 10:45:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133316 /var/tmp/spdk-raid.sock 00:19:37.701 10:45:04 -- common/autotest_common.sh@819 -- # '[' -z 133316 ']' 00:19:37.701 10:45:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:37.701 10:45:04 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:19:37.701 10:45:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:37.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:37.701 10:45:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:37.701 10:45:04 -- common/autotest_common.sh@10 -- # set +x 00:19:37.959 [2024-07-24 10:45:04.411954] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:37.959 [2024-07-24 10:45:04.413297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133316 ] 00:19:37.959 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:37.959 Zero copy mechanism will not be used. 00:19:37.959 [2024-07-24 10:45:04.568576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.217 [2024-07-24 10:45:04.693954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.217 [2024-07-24 10:45:04.767758] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:38.784 10:45:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:38.784 10:45:05 -- common/autotest_common.sh@852 -- # return 0 00:19:38.784 10:45:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:38.784 10:45:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:38.784 10:45:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:39.042 BaseBdev1 00:19:39.042 10:45:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:39.042 10:45:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:39.042 10:45:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:39.607 BaseBdev2 00:19:39.607 10:45:06 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:39.607 spare_malloc 00:19:39.869 10:45:06 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:39.869 spare_delay 00:19:39.869 10:45:06 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:40.136 [2024-07-24 10:45:06.753487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.136 [2024-07-24 10:45:06.754069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.136 [2024-07-24 10:45:06.754239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:40.136 [2024-07-24 10:45:06.754398] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.136 [2024-07-24 10:45:06.757596] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.136 [2024-07-24 10:45:06.757826] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.136 spare 00:19:40.136 10:45:06 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:40.394 [2024-07-24 10:45:06.982402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.394 [2024-07-24 10:45:06.985170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.394 [2024-07-24 10:45:06.985470] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:40.394 [2024-07-24 10:45:06.985597] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:40.394 [2024-07-24 10:45:06.985893] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:19:40.394 [2024-07-24 10:45:06.986553] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:40.394 [2024-07-24 10:45:06.986678] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:19:40.394 [2024-07-24 10:45:06.987062] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.394 10:45:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.394 10:45:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.394 10:45:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.394 10:45:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.394 10:45:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.652 10:45:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.652 "name": "raid_bdev1", 00:19:40.652 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:40.652 "strip_size_kb": 0, 00:19:40.652 "state": "online", 00:19:40.652 "raid_level": "raid1", 00:19:40.652 "superblock": false, 00:19:40.652 "num_base_bdevs": 2, 00:19:40.652 "num_base_bdevs_discovered": 2, 00:19:40.652 "num_base_bdevs_operational": 2, 00:19:40.652 "base_bdevs_list": [ 00:19:40.652 { 00:19:40.652 "name": "BaseBdev1", 00:19:40.652 "uuid": "c02dee35-7e0d-40ac-ad39-475fa8d10c00", 00:19:40.652 "is_configured": true, 00:19:40.652 "data_offset": 0, 00:19:40.652 "data_size": 65536 00:19:40.652 }, 00:19:40.652 { 00:19:40.652 "name": "BaseBdev2", 00:19:40.652 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:40.652 "is_configured": true, 00:19:40.652 "data_offset": 0, 00:19:40.652 "data_size": 65536 00:19:40.652 } 00:19:40.652 ] 00:19:40.652 }' 00:19:40.652 10:45:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.652 10:45:07 -- common/autotest_common.sh@10 -- # set +x 00:19:41.217 10:45:07 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:41.217 10:45:07 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:41.475 [2024-07-24 10:45:08.063441] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.475 10:45:08 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:41.475 10:45:08 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.475 10:45:08 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:41.733 10:45:08 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:41.733 10:45:08 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:41.733 10:45:08 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:41.733 10:45:08 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@12 -- # local i 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.733 10:45:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:41.991 [2024-07-24 10:45:08.567389] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:19:41.991 /dev/nbd0 00:19:41.991 10:45:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:41.991 10:45:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:41.991 10:45:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:41.991 10:45:08 -- common/autotest_common.sh@857 -- # local i 00:19:41.991 10:45:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:41.991 10:45:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:41.991 10:45:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:41.991 10:45:08 -- common/autotest_common.sh@861 -- # break 00:19:41.991 10:45:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:41.991 10:45:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:41.991 10:45:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.991 1+0 records in 00:19:41.991 1+0 records out 00:19:41.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268829 s, 15.2 MB/s 00:19:41.991 10:45:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.991 10:45:08 -- common/autotest_common.sh@874 -- # size=4096 00:19:41.991 10:45:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.991 10:45:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:41.991 10:45:08 -- common/autotest_common.sh@877 -- # return 0 00:19:41.991 10:45:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.991 10:45:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.991 10:45:08 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:41.991 10:45:08 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:41.991 10:45:08 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:47.250 65536+0 records in 00:19:47.250 65536+0 records out 00:19:47.250 33554432 bytes (34 MB, 32 MiB) 
copied, 5.06784 s, 6.6 MB/s 00:19:47.250 10:45:13 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:47.250 10:45:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:47.250 10:45:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:47.250 10:45:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:47.250 10:45:13 -- bdev/nbd_common.sh@51 -- # local i 00:19:47.250 10:45:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:47.250 10:45:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:47.508 [2024-07-24 10:45:13.990350] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@41 -- # break 00:19:47.508 10:45:13 -- bdev/nbd_common.sh@45 -- # return 0 00:19:47.508 10:45:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:47.767 [2024-07-24 10:45:14.226046] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.767 10:45:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.026 10:45:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.026 "name": "raid_bdev1", 00:19:48.026 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:48.026 "strip_size_kb": 0, 00:19:48.026 "state": "online", 00:19:48.026 "raid_level": "raid1", 00:19:48.026 "superblock": false, 00:19:48.026 "num_base_bdevs": 2, 00:19:48.026 "num_base_bdevs_discovered": 1, 00:19:48.026 "num_base_bdevs_operational": 1, 00:19:48.026 "base_bdevs_list": [ 00:19:48.026 { 00:19:48.026 "name": null, 00:19:48.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.026 "is_configured": false, 00:19:48.026 "data_offset": 0, 00:19:48.026 "data_size": 65536 00:19:48.026 }, 00:19:48.026 { 00:19:48.026 "name": "BaseBdev2", 00:19:48.026 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:48.026 "is_configured": true, 00:19:48.026 "data_offset": 0, 00:19:48.026 "data_size": 65536 00:19:48.026 } 00:19:48.026 ] 00:19:48.026 }' 
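Each of the state checks in this test follows the same query pattern: dump all raid bdevs over the test's RPC socket, select raid_bdev1 with jq, and compare individual fields. A minimal sketch of that pattern, assuming the socket path and repository layout used above (the field checks here are illustrative, not the test's exact assertion logic):

# Sketch: query raid_bdev1 state over the RPC socket used by this test.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$info" | jq -r '.state')                          # e.g. "configuring", "online", "offline"
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
echo "raid_bdev1 is $state with $discovered base bdev(s) discovered"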
00:19:48.026 10:45:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.026 10:45:14 -- common/autotest_common.sh@10 -- # set +x 00:19:48.593 10:45:15 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.851 [2024-07-24 10:45:15.322418] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:48.851 [2024-07-24 10:45:15.322518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.852 [2024-07-24 10:45:15.330012] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d05ee0 00:19:48.852 [2024-07-24 10:45:15.332393] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.852 10:45:15 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:49.787 10:45:16 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.787 10:45:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:49.787 10:45:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:49.787 10:45:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:49.787 10:45:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:49.787 10:45:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.787 10:45:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.045 10:45:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:50.045 "name": "raid_bdev1", 00:19:50.045 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:50.045 "strip_size_kb": 0, 00:19:50.045 "state": "online", 00:19:50.045 "raid_level": "raid1", 00:19:50.045 "superblock": false, 00:19:50.045 "num_base_bdevs": 2, 00:19:50.045 "num_base_bdevs_discovered": 2, 00:19:50.045 "num_base_bdevs_operational": 2, 00:19:50.045 "process": { 00:19:50.045 "type": "rebuild", 00:19:50.045 "target": "spare", 00:19:50.045 "progress": { 00:19:50.045 "blocks": 24576, 00:19:50.045 "percent": 37 00:19:50.045 } 00:19:50.045 }, 00:19:50.045 "base_bdevs_list": [ 00:19:50.045 { 00:19:50.045 "name": "spare", 00:19:50.045 "uuid": "2aee6935-d6ac-59f2-867d-a738bfe497ec", 00:19:50.045 "is_configured": true, 00:19:50.045 "data_offset": 0, 00:19:50.045 "data_size": 65536 00:19:50.045 }, 00:19:50.045 { 00:19:50.045 "name": "BaseBdev2", 00:19:50.045 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:50.045 "is_configured": true, 00:19:50.045 "data_offset": 0, 00:19:50.045 "data_size": 65536 00:19:50.045 } 00:19:50.045 ] 00:19:50.045 }' 00:19:50.045 10:45:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:50.045 10:45:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.045 10:45:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:50.304 10:45:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.304 10:45:16 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:50.562 [2024-07-24 10:45:17.014889] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.562 [2024-07-24 10:45:17.046879] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:50.562 [2024-07-24 10:45:17.047022] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.562 10:45:17 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.562 10:45:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:50.562 10:45:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.563 10:45:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.821 10:45:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.821 "name": "raid_bdev1", 00:19:50.821 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:50.821 "strip_size_kb": 0, 00:19:50.821 "state": "online", 00:19:50.821 "raid_level": "raid1", 00:19:50.821 "superblock": false, 00:19:50.821 "num_base_bdevs": 2, 00:19:50.821 "num_base_bdevs_discovered": 1, 00:19:50.821 "num_base_bdevs_operational": 1, 00:19:50.821 "base_bdevs_list": [ 00:19:50.821 { 00:19:50.821 "name": null, 00:19:50.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.821 "is_configured": false, 00:19:50.821 "data_offset": 0, 00:19:50.821 "data_size": 65536 00:19:50.821 }, 00:19:50.821 { 00:19:50.821 "name": "BaseBdev2", 00:19:50.821 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:50.821 "is_configured": true, 00:19:50.821 "data_offset": 0, 00:19:50.821 "data_size": 65536 00:19:50.821 } 00:19:50.821 ] 00:19:50.821 }' 00:19:50.821 10:45:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.821 10:45:17 -- common/autotest_common.sh@10 -- # set +x 00:19:51.386 10:45:17 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.386 10:45:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:51.386 10:45:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:51.386 10:45:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:51.386 10:45:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:51.386 10:45:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.386 10:45:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.644 10:45:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:51.644 "name": "raid_bdev1", 00:19:51.644 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:51.644 "strip_size_kb": 0, 00:19:51.644 "state": "online", 00:19:51.644 "raid_level": "raid1", 00:19:51.644 "superblock": false, 00:19:51.644 "num_base_bdevs": 2, 00:19:51.644 "num_base_bdevs_discovered": 1, 00:19:51.644 "num_base_bdevs_operational": 1, 00:19:51.644 "base_bdevs_list": [ 00:19:51.644 { 00:19:51.644 "name": null, 00:19:51.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.644 "is_configured": false, 00:19:51.644 "data_offset": 0, 00:19:51.644 "data_size": 65536 00:19:51.644 }, 00:19:51.644 { 00:19:51.644 "name": "BaseBdev2", 00:19:51.644 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:51.644 "is_configured": true, 
00:19:51.644 "data_offset": 0, 00:19:51.644 "data_size": 65536 00:19:51.644 } 00:19:51.644 ] 00:19:51.644 }' 00:19:51.644 10:45:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:51.644 10:45:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:51.644 10:45:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:51.901 10:45:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:51.901 10:45:18 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.901 [2024-07-24 10:45:18.587381] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:51.901 [2024-07-24 10:45:18.587476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.159 [2024-07-24 10:45:18.594818] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:19:52.159 [2024-07-24 10:45:18.597309] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.159 10:45:18 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:53.101 10:45:19 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.101 10:45:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:53.101 10:45:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:53.101 10:45:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:53.101 10:45:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:53.101 10:45:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.101 10:45:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.364 10:45:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:53.364 "name": "raid_bdev1", 00:19:53.364 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:53.364 "strip_size_kb": 0, 00:19:53.364 "state": "online", 00:19:53.364 "raid_level": "raid1", 00:19:53.364 "superblock": false, 00:19:53.364 "num_base_bdevs": 2, 00:19:53.364 "num_base_bdevs_discovered": 2, 00:19:53.364 "num_base_bdevs_operational": 2, 00:19:53.364 "process": { 00:19:53.364 "type": "rebuild", 00:19:53.364 "target": "spare", 00:19:53.364 "progress": { 00:19:53.364 "blocks": 24576, 00:19:53.364 "percent": 37 00:19:53.364 } 00:19:53.364 }, 00:19:53.364 "base_bdevs_list": [ 00:19:53.364 { 00:19:53.364 "name": "spare", 00:19:53.365 "uuid": "2aee6935-d6ac-59f2-867d-a738bfe497ec", 00:19:53.365 "is_configured": true, 00:19:53.365 "data_offset": 0, 00:19:53.365 "data_size": 65536 00:19:53.365 }, 00:19:53.365 { 00:19:53.365 "name": "BaseBdev2", 00:19:53.365 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:53.365 "is_configured": true, 00:19:53.365 "data_offset": 0, 00:19:53.365 "data_size": 65536 00:19:53.365 } 00:19:53.365 ] 00:19:53.365 }' 00:19:53.365 10:45:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:53.365 10:45:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.365 10:45:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:53.365 10:45:20 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@657 -- # local timeout=400 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.365 10:45:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.622 10:45:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:53.622 "name": "raid_bdev1", 00:19:53.622 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:53.622 "strip_size_kb": 0, 00:19:53.622 "state": "online", 00:19:53.622 "raid_level": "raid1", 00:19:53.622 "superblock": false, 00:19:53.622 "num_base_bdevs": 2, 00:19:53.622 "num_base_bdevs_discovered": 2, 00:19:53.622 "num_base_bdevs_operational": 2, 00:19:53.622 "process": { 00:19:53.622 "type": "rebuild", 00:19:53.622 "target": "spare", 00:19:53.622 "progress": { 00:19:53.622 "blocks": 32768, 00:19:53.622 "percent": 50 00:19:53.622 } 00:19:53.622 }, 00:19:53.622 "base_bdevs_list": [ 00:19:53.622 { 00:19:53.622 "name": "spare", 00:19:53.622 "uuid": "2aee6935-d6ac-59f2-867d-a738bfe497ec", 00:19:53.622 "is_configured": true, 00:19:53.622 "data_offset": 0, 00:19:53.622 "data_size": 65536 00:19:53.622 }, 00:19:53.622 { 00:19:53.622 "name": "BaseBdev2", 00:19:53.622 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:53.622 "is_configured": true, 00:19:53.622 "data_offset": 0, 00:19:53.622 "data_size": 65536 00:19:53.622 } 00:19:53.622 ] 00:19:53.622 }' 00:19:53.622 10:45:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:53.880 10:45:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.880 10:45:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:53.880 10:45:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.880 10:45:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.813 10:45:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.070 10:45:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.070 "name": "raid_bdev1", 00:19:55.070 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:55.070 "strip_size_kb": 0, 00:19:55.070 "state": "online", 00:19:55.070 "raid_level": "raid1", 00:19:55.070 "superblock": false, 00:19:55.070 "num_base_bdevs": 2, 00:19:55.070 "num_base_bdevs_discovered": 2, 00:19:55.070 "num_base_bdevs_operational": 2, 00:19:55.070 "process": { 
00:19:55.070 "type": "rebuild", 00:19:55.070 "target": "spare", 00:19:55.070 "progress": { 00:19:55.070 "blocks": 61440, 00:19:55.070 "percent": 93 00:19:55.070 } 00:19:55.070 }, 00:19:55.070 "base_bdevs_list": [ 00:19:55.070 { 00:19:55.070 "name": "spare", 00:19:55.070 "uuid": "2aee6935-d6ac-59f2-867d-a738bfe497ec", 00:19:55.070 "is_configured": true, 00:19:55.070 "data_offset": 0, 00:19:55.070 "data_size": 65536 00:19:55.070 }, 00:19:55.070 { 00:19:55.070 "name": "BaseBdev2", 00:19:55.070 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:55.070 "is_configured": true, 00:19:55.070 "data_offset": 0, 00:19:55.070 "data_size": 65536 00:19:55.070 } 00:19:55.070 ] 00:19:55.071 }' 00:19:55.071 10:45:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.071 10:45:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.071 10:45:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.328 10:45:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.328 10:45:21 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:55.328 [2024-07-24 10:45:21.821713] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:55.328 [2024-07-24 10:45:21.821827] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:55.328 [2024-07-24 10:45:21.821945] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.262 10:45:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.520 "name": "raid_bdev1", 00:19:56.520 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:56.520 "strip_size_kb": 0, 00:19:56.520 "state": "online", 00:19:56.520 "raid_level": "raid1", 00:19:56.520 "superblock": false, 00:19:56.520 "num_base_bdevs": 2, 00:19:56.520 "num_base_bdevs_discovered": 2, 00:19:56.520 "num_base_bdevs_operational": 2, 00:19:56.520 "base_bdevs_list": [ 00:19:56.520 { 00:19:56.520 "name": "spare", 00:19:56.520 "uuid": "2aee6935-d6ac-59f2-867d-a738bfe497ec", 00:19:56.520 "is_configured": true, 00:19:56.520 "data_offset": 0, 00:19:56.520 "data_size": 65536 00:19:56.520 }, 00:19:56.520 { 00:19:56.520 "name": "BaseBdev2", 00:19:56.520 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:56.520 "is_configured": true, 00:19:56.520 "data_offset": 0, 00:19:56.520 "data_size": 65536 00:19:56.520 } 00:19:56.520 ] 00:19:56.520 }' 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@660 -- # break 00:19:56.520 10:45:23 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.520 10:45:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.779 10:45:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.779 "name": "raid_bdev1", 00:19:56.779 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:56.779 "strip_size_kb": 0, 00:19:56.779 "state": "online", 00:19:56.779 "raid_level": "raid1", 00:19:56.779 "superblock": false, 00:19:56.779 "num_base_bdevs": 2, 00:19:56.779 "num_base_bdevs_discovered": 2, 00:19:56.779 "num_base_bdevs_operational": 2, 00:19:56.779 "base_bdevs_list": [ 00:19:56.779 { 00:19:56.779 "name": "spare", 00:19:56.779 "uuid": "2aee6935-d6ac-59f2-867d-a738bfe497ec", 00:19:56.779 "is_configured": true, 00:19:56.779 "data_offset": 0, 00:19:56.779 "data_size": 65536 00:19:56.779 }, 00:19:56.779 { 00:19:56.779 "name": "BaseBdev2", 00:19:56.779 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:56.779 "is_configured": true, 00:19:56.779 "data_offset": 0, 00:19:56.779 "data_size": 65536 00:19:56.779 } 00:19:56.779 ] 00:19:56.779 }' 00:19:56.779 10:45:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.779 10:45:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:56.779 10:45:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:57.036 10:45:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:57.036 10:45:23 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:57.036 10:45:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.037 10:45:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.295 10:45:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.295 "name": "raid_bdev1", 00:19:57.295 "uuid": "3f3ebb06-a033-40fa-8921-a8ee33e00c52", 00:19:57.295 "strip_size_kb": 0, 00:19:57.295 "state": "online", 00:19:57.295 "raid_level": "raid1", 00:19:57.295 "superblock": false, 00:19:57.295 "num_base_bdevs": 2, 00:19:57.295 "num_base_bdevs_discovered": 2, 00:19:57.295 "num_base_bdevs_operational": 2, 00:19:57.295 "base_bdevs_list": [ 00:19:57.295 { 00:19:57.295 "name": "spare", 00:19:57.295 "uuid": "2aee6935-d6ac-59f2-867d-a738bfe497ec", 00:19:57.295 "is_configured": true, 00:19:57.295 "data_offset": 0, 
00:19:57.295 "data_size": 65536 00:19:57.295 }, 00:19:57.295 { 00:19:57.295 "name": "BaseBdev2", 00:19:57.295 "uuid": "2e5b831d-8fca-461b-a2eb-9aff0d8ef004", 00:19:57.295 "is_configured": true, 00:19:57.295 "data_offset": 0, 00:19:57.295 "data_size": 65536 00:19:57.295 } 00:19:57.295 ] 00:19:57.295 }' 00:19:57.295 10:45:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.295 10:45:23 -- common/autotest_common.sh@10 -- # set +x 00:19:57.860 10:45:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:58.118 [2024-07-24 10:45:24.670332] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:58.118 [2024-07-24 10:45:24.670410] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:58.118 [2024-07-24 10:45:24.670619] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.118 [2024-07-24 10:45:24.670741] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.118 [2024-07-24 10:45:24.670760] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:19:58.118 10:45:24 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:58.118 10:45:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.376 10:45:24 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:58.376 10:45:24 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:58.376 10:45:24 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@12 -- # local i 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.376 10:45:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:58.633 /dev/nbd0 00:19:58.633 10:45:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:58.633 10:45:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:58.633 10:45:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:19:58.633 10:45:25 -- common/autotest_common.sh@857 -- # local i 00:19:58.633 10:45:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:58.633 10:45:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:58.633 10:45:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:19:58.633 10:45:25 -- common/autotest_common.sh@861 -- # break 00:19:58.633 10:45:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:58.633 10:45:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:58.633 10:45:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.633 1+0 records in 00:19:58.633 1+0 records out 00:19:58.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617797 s, 6.6 MB/s 00:19:58.633 10:45:25 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.633 10:45:25 -- common/autotest_common.sh@874 -- # size=4096 00:19:58.633 10:45:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.633 10:45:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:58.633 10:45:25 -- common/autotest_common.sh@877 -- # return 0 00:19:58.633 10:45:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.633 10:45:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.633 10:45:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:58.891 /dev/nbd1 00:19:58.891 10:45:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:58.891 10:45:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:58.891 10:45:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:19:58.891 10:45:25 -- common/autotest_common.sh@857 -- # local i 00:19:58.891 10:45:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:19:58.891 10:45:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:19:58.891 10:45:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:19:58.891 10:45:25 -- common/autotest_common.sh@861 -- # break 00:19:58.891 10:45:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:19:58.891 10:45:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:19:58.891 10:45:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.891 1+0 records in 00:19:58.891 1+0 records out 00:19:58.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525142 s, 7.8 MB/s 00:19:58.891 10:45:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.891 10:45:25 -- common/autotest_common.sh@874 -- # size=4096 00:19:58.891 10:45:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.891 10:45:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:19:58.891 10:45:25 -- common/autotest_common.sh@877 -- # return 0 00:19:58.891 10:45:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.891 10:45:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:58.891 10:45:25 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:59.149 10:45:25 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:59.149 10:45:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:59.149 10:45:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:59.149 10:45:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.149 10:45:25 -- bdev/nbd_common.sh@51 -- # local i 00:19:59.149 10:45:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.149 10:45:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@41 -- # break 
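The check just traced exports the surviving base bdev and the rebuilt spare as NBD block devices and compares them byte-for-byte with cmp before tearing the devices back down (bdev_raid.sh@687-689 above). A minimal standalone sketch of that verification pattern, reusing the RPC socket and bdev names from this run (illustrative only, not the test's own helper):
# sketch only: export two raid members over NBD and diff their contents
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC nbd_start_disk BaseBdev1 /dev/nbd0
$RPC nbd_start_disk spare /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1   # offset 0 here; the superblock variant later compares from byte 1048576 (data_offset 2048 blocks x 512 B) to skip the raid superblock
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1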
00:19:59.421 10:45:25 -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.421 10:45:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@41 -- # break 00:19:59.680 10:45:26 -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.680 10:45:26 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:59.680 10:45:26 -- bdev/bdev_raid.sh@709 -- # killprocess 133316 00:19:59.680 10:45:26 -- common/autotest_common.sh@926 -- # '[' -z 133316 ']' 00:19:59.680 10:45:26 -- common/autotest_common.sh@930 -- # kill -0 133316 00:19:59.680 10:45:26 -- common/autotest_common.sh@931 -- # uname 00:19:59.680 10:45:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:59.680 10:45:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133316 00:19:59.680 10:45:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:59.680 10:45:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:59.680 10:45:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133316' 00:19:59.680 killing process with pid 133316 00:19:59.680 10:45:26 -- common/autotest_common.sh@945 -- # kill 133316 00:19:59.680 Received shutdown signal, test time was about 60.000000 seconds 00:19:59.680 00:19:59.680 Latency(us) 00:19:59.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.680 =================================================================================================================== 00:19:59.680 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.680 [2024-07-24 10:45:26.259893] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.680 10:45:26 -- common/autotest_common.sh@950 -- # wait 133316 00:19:59.680 [2024-07-24 10:45:26.304411] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.247 ************************************ 00:20:00.247 END TEST raid_rebuild_test 00:20:00.247 ************************************ 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:00.247 00:20:00.247 real 0m22.324s 00:20:00.247 user 0m31.493s 00:20:00.247 sys 0m4.033s 00:20:00.247 10:45:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.247 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:00.247 10:45:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:00.247 10:45:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:00.247 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:00.247 ************************************ 00:20:00.247 START TEST raid_rebuild_test_sb 00:20:00.247 ************************************ 00:20:00.247 10:45:26 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:00.247 
10:45:26 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@544 -- # raid_pid=133864 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133864 /var/tmp/spdk-raid.sock 00:20:00.247 10:45:26 -- common/autotest_common.sh@819 -- # '[' -z 133864 ']' 00:20:00.247 10:45:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:00.247 10:45:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.247 10:45:26 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:00.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:00.247 10:45:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:00.247 10:45:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.247 10:45:26 -- common/autotest_common.sh@10 -- # set +x 00:20:00.247 [2024-07-24 10:45:26.786036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:00.247 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:00.247 Zero copy mechanism will not be used. 
00:20:00.247 [2024-07-24 10:45:26.786317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133864 ] 00:20:00.247 [2024-07-24 10:45:26.929937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.506 [2024-07-24 10:45:27.058053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.506 [2024-07-24 10:45:27.132934] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.073 10:45:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.073 10:45:27 -- common/autotest_common.sh@852 -- # return 0 00:20:01.073 10:45:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:01.073 10:45:27 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:01.073 10:45:27 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:01.331 BaseBdev1_malloc 00:20:01.331 10:45:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:01.589 [2024-07-24 10:45:28.225338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:01.590 [2024-07-24 10:45:28.225488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.590 [2024-07-24 10:45:28.225534] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:01.590 [2024-07-24 10:45:28.225587] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.590 [2024-07-24 10:45:28.228620] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.590 [2024-07-24 10:45:28.228690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:01.590 BaseBdev1 00:20:01.590 10:45:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:01.590 10:45:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:01.590 10:45:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:01.848 BaseBdev2_malloc 00:20:01.848 10:45:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:02.106 [2024-07-24 10:45:28.744018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:02.106 [2024-07-24 10:45:28.744139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.106 [2024-07-24 10:45:28.744187] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:02.106 [2024-07-24 10:45:28.744235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.106 [2024-07-24 10:45:28.746783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.106 [2024-07-24 10:45:28.746851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:02.106 BaseBdev2 00:20:02.106 10:45:28 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:02.364 spare_malloc 00:20:02.364 10:45:29 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:02.622 spare_delay 00:20:02.880 10:45:29 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:02.880 [2024-07-24 10:45:29.525823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:02.880 [2024-07-24 10:45:29.525970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.880 [2024-07-24 10:45:29.526024] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:02.880 [2024-07-24 10:45:29.526074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.880 [2024-07-24 10:45:29.529056] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.880 [2024-07-24 10:45:29.529117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:02.880 spare 00:20:02.880 10:45:29 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:03.139 [2024-07-24 10:45:29.766036] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.139 [2024-07-24 10:45:29.768605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.139 [2024-07-24 10:45:29.768854] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:20:03.139 [2024-07-24 10:45:29.768871] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:03.139 [2024-07-24 10:45:29.769073] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:03.139 [2024-07-24 10:45:29.769556] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:20:03.139 [2024-07-24 10:45:29.769580] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:20:03.139 [2024-07-24 10:45:29.769816] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.139 10:45:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.397 10:45:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.397 "name": "raid_bdev1", 00:20:03.397 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:03.397 
"strip_size_kb": 0, 00:20:03.397 "state": "online", 00:20:03.397 "raid_level": "raid1", 00:20:03.397 "superblock": true, 00:20:03.397 "num_base_bdevs": 2, 00:20:03.397 "num_base_bdevs_discovered": 2, 00:20:03.397 "num_base_bdevs_operational": 2, 00:20:03.397 "base_bdevs_list": [ 00:20:03.397 { 00:20:03.397 "name": "BaseBdev1", 00:20:03.397 "uuid": "849ddb61-73ac-5125-95e1-99be5eec702e", 00:20:03.397 "is_configured": true, 00:20:03.397 "data_offset": 2048, 00:20:03.397 "data_size": 63488 00:20:03.397 }, 00:20:03.397 { 00:20:03.397 "name": "BaseBdev2", 00:20:03.397 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:03.397 "is_configured": true, 00:20:03.397 "data_offset": 2048, 00:20:03.397 "data_size": 63488 00:20:03.397 } 00:20:03.397 ] 00:20:03.397 }' 00:20:03.397 10:45:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:03.397 10:45:30 -- common/autotest_common.sh@10 -- # set +x 00:20:04.330 10:45:30 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:04.330 10:45:30 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.330 [2024-07-24 10:45:30.870363] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.330 10:45:30 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:04.330 10:45:30 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.330 10:45:30 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:04.589 10:45:31 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:04.589 10:45:31 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:04.589 10:45:31 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:04.589 10:45:31 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@12 -- # local i 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:04.589 10:45:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:04.847 [2024-07-24 10:45:31.338422] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:04.847 /dev/nbd0 00:20:04.847 10:45:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:04.847 10:45:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:04.847 10:45:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:04.847 10:45:31 -- common/autotest_common.sh@857 -- # local i 00:20:04.847 10:45:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:04.847 10:45:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:04.847 10:45:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:04.847 10:45:31 -- common/autotest_common.sh@861 -- # break 00:20:04.847 10:45:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:04.847 10:45:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:04.847 10:45:31 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:04.847 1+0 records in 00:20:04.847 1+0 records out 00:20:04.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385417 s, 10.6 MB/s 00:20:04.847 10:45:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:04.847 10:45:31 -- common/autotest_common.sh@874 -- # size=4096 00:20:04.847 10:45:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:04.847 10:45:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:04.847 10:45:31 -- common/autotest_common.sh@877 -- # return 0 00:20:04.847 10:45:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:04.847 10:45:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:04.847 10:45:31 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:04.847 10:45:31 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:04.847 10:45:31 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:11.425 63488+0 records in 00:20:11.425 63488+0 records out 00:20:11.425 32505856 bytes (33 MB, 31 MiB) copied, 5.57435 s, 5.8 MB/s 00:20:11.425 10:45:36 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:11.425 10:45:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:11.425 10:45:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:11.425 10:45:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:11.425 10:45:36 -- bdev/nbd_common.sh@51 -- # local i 00:20:11.425 10:45:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.425 10:45:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:11.425 [2024-07-24 10:45:37.255804] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@41 -- # break 00:20:11.425 10:45:37 -- bdev/nbd_common.sh@45 -- # return 0 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:11.425 [2024-07-24 10:45:37.527037] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:11.425 10:45:37 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:11.425 "name": "raid_bdev1", 00:20:11.425 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:11.425 "strip_size_kb": 0, 00:20:11.425 "state": "online", 00:20:11.425 "raid_level": "raid1", 00:20:11.425 "superblock": true, 00:20:11.425 "num_base_bdevs": 2, 00:20:11.425 "num_base_bdevs_discovered": 1, 00:20:11.425 "num_base_bdevs_operational": 1, 00:20:11.425 "base_bdevs_list": [ 00:20:11.425 { 00:20:11.425 "name": null, 00:20:11.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.425 "is_configured": false, 00:20:11.425 "data_offset": 2048, 00:20:11.425 "data_size": 63488 00:20:11.425 }, 00:20:11.425 { 00:20:11.425 "name": "BaseBdev2", 00:20:11.425 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:11.425 "is_configured": true, 00:20:11.425 "data_offset": 2048, 00:20:11.425 "data_size": 63488 00:20:11.425 } 00:20:11.425 ] 00:20:11.425 }' 00:20:11.425 10:45:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:11.425 10:45:37 -- common/autotest_common.sh@10 -- # set +x 00:20:11.992 10:45:38 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:12.250 [2024-07-24 10:45:38.739312] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:12.250 [2024-07-24 10:45:38.739390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:12.250 [2024-07-24 10:45:38.745003] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e0e0 00:20:12.250 [2024-07-24 10:45:38.747341] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:12.250 10:45:38 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:13.185 10:45:39 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:13.185 10:45:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:13.185 10:45:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:13.185 10:45:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:13.185 10:45:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:13.185 10:45:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.185 10:45:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.444 10:45:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:13.444 "name": "raid_bdev1", 00:20:13.444 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:13.444 "strip_size_kb": 0, 00:20:13.444 "state": "online", 00:20:13.444 "raid_level": "raid1", 00:20:13.444 "superblock": true, 00:20:13.444 "num_base_bdevs": 2, 00:20:13.444 "num_base_bdevs_discovered": 2, 00:20:13.444 "num_base_bdevs_operational": 2, 00:20:13.444 "process": { 00:20:13.444 "type": "rebuild", 00:20:13.444 "target": "spare", 00:20:13.444 "progress": { 00:20:13.444 "blocks": 24576, 00:20:13.444 "percent": 38 00:20:13.444 } 00:20:13.444 }, 00:20:13.444 "base_bdevs_list": [ 00:20:13.444 { 00:20:13.444 "name": "spare", 00:20:13.444 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:13.444 "is_configured": true, 00:20:13.444 
"data_offset": 2048, 00:20:13.444 "data_size": 63488 00:20:13.444 }, 00:20:13.444 { 00:20:13.444 "name": "BaseBdev2", 00:20:13.444 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:13.444 "is_configured": true, 00:20:13.444 "data_offset": 2048, 00:20:13.444 "data_size": 63488 00:20:13.444 } 00:20:13.444 ] 00:20:13.444 }' 00:20:13.444 10:45:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:13.444 10:45:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:13.444 10:45:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:13.444 10:45:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:13.444 10:45:40 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:13.703 [2024-07-24 10:45:40.365738] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:13.961 [2024-07-24 10:45:40.460522] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:13.961 [2024-07-24 10:45:40.460696] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.961 10:45:40 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.962 10:45:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.221 10:45:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:14.221 "name": "raid_bdev1", 00:20:14.221 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:14.221 "strip_size_kb": 0, 00:20:14.221 "state": "online", 00:20:14.221 "raid_level": "raid1", 00:20:14.221 "superblock": true, 00:20:14.221 "num_base_bdevs": 2, 00:20:14.221 "num_base_bdevs_discovered": 1, 00:20:14.221 "num_base_bdevs_operational": 1, 00:20:14.221 "base_bdevs_list": [ 00:20:14.221 { 00:20:14.221 "name": null, 00:20:14.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.221 "is_configured": false, 00:20:14.221 "data_offset": 2048, 00:20:14.221 "data_size": 63488 00:20:14.221 }, 00:20:14.221 { 00:20:14.221 "name": "BaseBdev2", 00:20:14.221 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:14.221 "is_configured": true, 00:20:14.221 "data_offset": 2048, 00:20:14.221 "data_size": 63488 00:20:14.221 } 00:20:14.221 ] 00:20:14.221 }' 00:20:14.221 10:45:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:14.221 10:45:40 -- common/autotest_common.sh@10 -- # set +x 00:20:14.788 10:45:41 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.788 10:45:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:14.788 10:45:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:20:14.788 10:45:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:14.788 10:45:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:14.788 10:45:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.788 10:45:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.046 10:45:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:15.046 "name": "raid_bdev1", 00:20:15.046 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:15.046 "strip_size_kb": 0, 00:20:15.046 "state": "online", 00:20:15.046 "raid_level": "raid1", 00:20:15.046 "superblock": true, 00:20:15.046 "num_base_bdevs": 2, 00:20:15.046 "num_base_bdevs_discovered": 1, 00:20:15.046 "num_base_bdevs_operational": 1, 00:20:15.046 "base_bdevs_list": [ 00:20:15.046 { 00:20:15.046 "name": null, 00:20:15.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.046 "is_configured": false, 00:20:15.046 "data_offset": 2048, 00:20:15.046 "data_size": 63488 00:20:15.046 }, 00:20:15.046 { 00:20:15.046 "name": "BaseBdev2", 00:20:15.046 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:15.046 "is_configured": true, 00:20:15.046 "data_offset": 2048, 00:20:15.046 "data_size": 63488 00:20:15.046 } 00:20:15.046 ] 00:20:15.046 }' 00:20:15.046 10:45:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:15.046 10:45:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:15.046 10:45:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:15.312 10:45:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:15.312 10:45:41 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.312 [2024-07-24 10:45:41.970735] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:15.312 [2024-07-24 10:45:41.970793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.312 [2024-07-24 10:45:41.976243] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:20:15.312 [2024-07-24 10:45:41.978504] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:15.312 10:45:41 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:16.687 10:45:42 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.687 10:45:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.687 10:45:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:16.687 10:45:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:16.687 10:45:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.687 10:45:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.687 10:45:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.687 10:45:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.687 "name": "raid_bdev1", 00:20:16.687 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:16.687 "strip_size_kb": 0, 00:20:16.687 "state": "online", 00:20:16.687 "raid_level": "raid1", 00:20:16.687 "superblock": true, 00:20:16.688 "num_base_bdevs": 2, 00:20:16.688 "num_base_bdevs_discovered": 2, 00:20:16.688 "num_base_bdevs_operational": 2, 00:20:16.688 "process": { 00:20:16.688 "type": "rebuild", 00:20:16.688 "target": "spare", 
00:20:16.688 "progress": { 00:20:16.688 "blocks": 24576, 00:20:16.688 "percent": 38 00:20:16.688 } 00:20:16.688 }, 00:20:16.688 "base_bdevs_list": [ 00:20:16.688 { 00:20:16.688 "name": "spare", 00:20:16.688 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:16.688 "is_configured": true, 00:20:16.688 "data_offset": 2048, 00:20:16.688 "data_size": 63488 00:20:16.688 }, 00:20:16.688 { 00:20:16.688 "name": "BaseBdev2", 00:20:16.688 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:16.688 "is_configured": true, 00:20:16.688 "data_offset": 2048, 00:20:16.688 "data_size": 63488 00:20:16.688 } 00:20:16.688 ] 00:20:16.688 }' 00:20:16.688 10:45:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.688 10:45:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.688 10:45:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:16.946 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@657 -- # local timeout=423 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.946 "name": "raid_bdev1", 00:20:16.946 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:16.946 "strip_size_kb": 0, 00:20:16.946 "state": "online", 00:20:16.946 "raid_level": "raid1", 00:20:16.946 "superblock": true, 00:20:16.946 "num_base_bdevs": 2, 00:20:16.946 "num_base_bdevs_discovered": 2, 00:20:16.946 "num_base_bdevs_operational": 2, 00:20:16.946 "process": { 00:20:16.946 "type": "rebuild", 00:20:16.946 "target": "spare", 00:20:16.946 "progress": { 00:20:16.946 "blocks": 32768, 00:20:16.946 "percent": 51 00:20:16.946 } 00:20:16.946 }, 00:20:16.946 "base_bdevs_list": [ 00:20:16.946 { 00:20:16.946 "name": "spare", 00:20:16.946 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:16.946 "is_configured": true, 00:20:16.946 "data_offset": 2048, 00:20:16.946 "data_size": 63488 00:20:16.946 }, 00:20:16.946 { 00:20:16.946 "name": "BaseBdev2", 00:20:16.946 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:16.946 "is_configured": true, 00:20:16.946 "data_offset": 2048, 00:20:16.946 "data_size": 63488 00:20:16.946 } 00:20:16.946 ] 00:20:16.946 }' 00:20:16.946 10:45:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:17.204 10:45:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:17.204 10:45:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:17.204 10:45:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.204 10:45:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.140 10:45:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.399 10:45:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:18.399 "name": "raid_bdev1", 00:20:18.399 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:18.399 "strip_size_kb": 0, 00:20:18.399 "state": "online", 00:20:18.399 "raid_level": "raid1", 00:20:18.399 "superblock": true, 00:20:18.399 "num_base_bdevs": 2, 00:20:18.399 "num_base_bdevs_discovered": 2, 00:20:18.399 "num_base_bdevs_operational": 2, 00:20:18.399 "process": { 00:20:18.399 "type": "rebuild", 00:20:18.399 "target": "spare", 00:20:18.399 "progress": { 00:20:18.399 "blocks": 59392, 00:20:18.399 "percent": 93 00:20:18.399 } 00:20:18.399 }, 00:20:18.399 "base_bdevs_list": [ 00:20:18.399 { 00:20:18.399 "name": "spare", 00:20:18.399 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:18.399 "is_configured": true, 00:20:18.399 "data_offset": 2048, 00:20:18.399 "data_size": 63488 00:20:18.399 }, 00:20:18.399 { 00:20:18.399 "name": "BaseBdev2", 00:20:18.399 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:18.399 "is_configured": true, 00:20:18.399 "data_offset": 2048, 00:20:18.399 "data_size": 63488 00:20:18.399 } 00:20:18.399 ] 00:20:18.399 }' 00:20:18.399 10:45:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:18.399 10:45:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.399 10:45:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:18.399 10:45:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.399 10:45:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:18.658 [2024-07-24 10:45:45.098874] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:18.658 [2024-07-24 10:45:45.098975] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:18.658 [2024-07-24 10:45:45.099144] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.623 10:45:46 -- bdev/bdev_raid.sh@188 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:19.880 "name": "raid_bdev1", 00:20:19.880 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:19.880 "strip_size_kb": 0, 00:20:19.880 "state": "online", 00:20:19.880 "raid_level": "raid1", 00:20:19.880 "superblock": true, 00:20:19.880 "num_base_bdevs": 2, 00:20:19.880 "num_base_bdevs_discovered": 2, 00:20:19.880 "num_base_bdevs_operational": 2, 00:20:19.880 "base_bdevs_list": [ 00:20:19.880 { 00:20:19.880 "name": "spare", 00:20:19.880 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:19.880 "is_configured": true, 00:20:19.880 "data_offset": 2048, 00:20:19.880 "data_size": 63488 00:20:19.880 }, 00:20:19.880 { 00:20:19.880 "name": "BaseBdev2", 00:20:19.880 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:19.880 "is_configured": true, 00:20:19.880 "data_offset": 2048, 00:20:19.880 "data_size": 63488 00:20:19.880 } 00:20:19.880 ] 00:20:19.880 }' 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@660 -- # break 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.880 10:45:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:20.138 "name": "raid_bdev1", 00:20:20.138 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:20.138 "strip_size_kb": 0, 00:20:20.138 "state": "online", 00:20:20.138 "raid_level": "raid1", 00:20:20.138 "superblock": true, 00:20:20.138 "num_base_bdevs": 2, 00:20:20.138 "num_base_bdevs_discovered": 2, 00:20:20.138 "num_base_bdevs_operational": 2, 00:20:20.138 "base_bdevs_list": [ 00:20:20.138 { 00:20:20.138 "name": "spare", 00:20:20.138 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:20.138 "is_configured": true, 00:20:20.138 "data_offset": 2048, 00:20:20.138 "data_size": 63488 00:20:20.138 }, 00:20:20.138 { 00:20:20.138 "name": "BaseBdev2", 00:20:20.138 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:20.138 "is_configured": true, 00:20:20.138 "data_offset": 2048, 00:20:20.138 "data_size": 63488 00:20:20.138 } 00:20:20.138 ] 00:20:20.138 }' 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.138 10:45:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.395 10:45:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.395 10:45:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.395 "name": "raid_bdev1", 00:20:20.395 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:20.395 "strip_size_kb": 0, 00:20:20.395 "state": "online", 00:20:20.395 "raid_level": "raid1", 00:20:20.395 "superblock": true, 00:20:20.395 "num_base_bdevs": 2, 00:20:20.395 "num_base_bdevs_discovered": 2, 00:20:20.395 "num_base_bdevs_operational": 2, 00:20:20.395 "base_bdevs_list": [ 00:20:20.395 { 00:20:20.395 "name": "spare", 00:20:20.395 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:20.395 "is_configured": true, 00:20:20.395 "data_offset": 2048, 00:20:20.395 "data_size": 63488 00:20:20.396 }, 00:20:20.396 { 00:20:20.396 "name": "BaseBdev2", 00:20:20.396 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:20.396 "is_configured": true, 00:20:20.396 "data_offset": 2048, 00:20:20.396 "data_size": 63488 00:20:20.396 } 00:20:20.396 ] 00:20:20.396 }' 00:20:20.396 10:45:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.396 10:45:47 -- common/autotest_common.sh@10 -- # set +x 00:20:21.330 10:45:47 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:21.330 [2024-07-24 10:45:47.913582] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:21.330 [2024-07-24 10:45:47.913652] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.330 [2024-07-24 10:45:47.913808] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.330 [2024-07-24 10:45:47.913928] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.330 [2024-07-24 10:45:47.913945] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:20:21.330 10:45:47 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.330 10:45:47 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:21.587 10:45:48 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:21.587 10:45:48 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:21.587 10:45:48 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:20:21.587 10:45:48 -- bdev/nbd_common.sh@12 -- # local i 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:21.587 10:45:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:21.845 /dev/nbd0 00:20:21.845 10:45:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:21.845 10:45:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:21.845 10:45:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:21.845 10:45:48 -- common/autotest_common.sh@857 -- # local i 00:20:21.845 10:45:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:21.845 10:45:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:21.845 10:45:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:21.845 10:45:48 -- common/autotest_common.sh@861 -- # break 00:20:21.845 10:45:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:21.845 10:45:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:21.845 10:45:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:21.845 1+0 records in 00:20:21.845 1+0 records out 00:20:21.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302013 s, 13.6 MB/s 00:20:21.845 10:45:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:21.845 10:45:48 -- common/autotest_common.sh@874 -- # size=4096 00:20:21.845 10:45:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:21.845 10:45:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:21.845 10:45:48 -- common/autotest_common.sh@877 -- # return 0 00:20:21.845 10:45:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:21.845 10:45:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:21.845 10:45:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:22.104 /dev/nbd1 00:20:22.104 10:45:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:22.104 10:45:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:22.104 10:45:48 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:22.104 10:45:48 -- common/autotest_common.sh@857 -- # local i 00:20:22.104 10:45:48 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:22.104 10:45:48 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:22.104 10:45:48 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:22.104 10:45:48 -- common/autotest_common.sh@861 -- # break 00:20:22.104 10:45:48 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:22.104 10:45:48 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:22.104 10:45:48 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.104 1+0 records in 00:20:22.104 1+0 records out 00:20:22.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422787 s, 9.7 MB/s 00:20:22.104 10:45:48 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.104 10:45:48 -- common/autotest_common.sh@874 -- # size=4096 00:20:22.104 10:45:48 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.104 10:45:48 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:20:22.104 10:45:48 -- common/autotest_common.sh@877 -- # return 0 00:20:22.104 10:45:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:22.104 10:45:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.104 10:45:48 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:22.362 10:45:48 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:22.362 10:45:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:22.362 10:45:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:22.362 10:45:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:22.362 10:45:48 -- bdev/nbd_common.sh@51 -- # local i 00:20:22.362 10:45:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.362 10:45:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@41 -- # break 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:22.620 10:45:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@41 -- # break 00:20:22.877 10:45:49 -- bdev/nbd_common.sh@45 -- # return 0 00:20:22.877 10:45:49 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:22.877 10:45:49 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:22.877 10:45:49 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:22.877 10:45:49 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:23.135 10:45:49 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:23.393 [2024-07-24 10:45:49.990445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:23.393 [2024-07-24 10:45:49.990597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.393 [2024-07-24 10:45:49.990648] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:23.393 [2024-07-24 10:45:49.990684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.394 [2024-07-24 10:45:49.993602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.394 [2024-07-24 10:45:49.993682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
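The delete/re-create pair above (bdev_raid.sh@698-699) is the superblock re-assembly check: each passthru base bdev is torn down and built again on top of its malloc backing device, and because the array was created with -s, the raid module's examine path finds the on-disk superblock and re-claims the member without another bdev_raid_create call, as the messages that follow show. Per member the step is just two RPCs; a sketch with this run's names:
# sketch only: recreate one member and let raid examine re-claim it from its superblock
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_passthru_delete BaseBdev1
$RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1   # examine then logs: raid superblock found on bdev BaseBdev1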
00:20:23.394 [2024-07-24 10:45:49.993794] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:23.394 [2024-07-24 10:45:49.993884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:23.394 BaseBdev1 00:20:23.394 10:45:50 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:23.394 10:45:50 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:23.394 10:45:50 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:23.652 10:45:50 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:23.910 [2024-07-24 10:45:50.478566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:23.910 [2024-07-24 10:45:50.478721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.910 [2024-07-24 10:45:50.478808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:23.910 [2024-07-24 10:45:50.478843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.910 [2024-07-24 10:45:50.479398] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.910 [2024-07-24 10:45:50.479473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:23.910 [2024-07-24 10:45:50.479669] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:23.910 [2024-07-24 10:45:50.479689] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:23.910 [2024-07-24 10:45:50.479697] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:23.910 [2024-07-24 10:45:50.479748] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:20:23.910 [2024-07-24 10:45:50.479820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.910 BaseBdev2 00:20:23.910 10:45:50 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:24.168 10:45:50 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:24.426 [2024-07-24 10:45:50.990646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:24.426 [2024-07-24 10:45:50.990803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.426 [2024-07-24 10:45:50.990870] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:24.426 [2024-07-24 10:45:50.990901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.426 [2024-07-24 10:45:50.991552] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.426 [2024-07-24 10:45:50.991615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:24.426 [2024-07-24 10:45:50.991740] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:24.426 [2024-07-24 10:45:50.991782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:20:24.426 spare 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.426 10:45:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.426 [2024-07-24 10:45:51.091947] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:20:24.426 [2024-07-24 10:45:51.092000] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:24.427 [2024-07-24 10:45:51.092237] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:20:24.427 [2024-07-24 10:45:51.092752] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:20:24.427 [2024-07-24 10:45:51.092777] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:20:24.427 [2024-07-24 10:45:51.092949] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.685 10:45:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.685 "name": "raid_bdev1", 00:20:24.685 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:24.685 "strip_size_kb": 0, 00:20:24.685 "state": "online", 00:20:24.685 "raid_level": "raid1", 00:20:24.685 "superblock": true, 00:20:24.685 "num_base_bdevs": 2, 00:20:24.685 "num_base_bdevs_discovered": 2, 00:20:24.685 "num_base_bdevs_operational": 2, 00:20:24.685 "base_bdevs_list": [ 00:20:24.685 { 00:20:24.685 "name": "spare", 00:20:24.685 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:24.685 "is_configured": true, 00:20:24.685 "data_offset": 2048, 00:20:24.685 "data_size": 63488 00:20:24.685 }, 00:20:24.685 { 00:20:24.685 "name": "BaseBdev2", 00:20:24.685 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:24.685 "is_configured": true, 00:20:24.685 "data_offset": 2048, 00:20:24.685 "data_size": 63488 00:20:24.685 } 00:20:24.685 ] 00:20:24.685 }' 00:20:24.685 10:45:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.685 10:45:51 -- common/autotest_common.sh@10 -- # set +x 00:20:25.251 10:45:51 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.251 10:45:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:25.251 10:45:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:25.251 10:45:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:25.251 10:45:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:25.251 10:45:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.251 10:45:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:25.509 10:45:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:25.509 "name": "raid_bdev1", 00:20:25.509 "uuid": "885b14f6-f165-4439-bece-a21ef53806ea", 00:20:25.509 "strip_size_kb": 0, 00:20:25.509 "state": "online", 00:20:25.509 "raid_level": "raid1", 00:20:25.509 "superblock": true, 00:20:25.509 "num_base_bdevs": 2, 00:20:25.509 "num_base_bdevs_discovered": 2, 00:20:25.509 "num_base_bdevs_operational": 2, 00:20:25.509 "base_bdevs_list": [ 00:20:25.509 { 00:20:25.509 "name": "spare", 00:20:25.509 "uuid": "68a2db6a-87df-5fc3-91b2-e73526973ab9", 00:20:25.509 "is_configured": true, 00:20:25.509 "data_offset": 2048, 00:20:25.509 "data_size": 63488 00:20:25.509 }, 00:20:25.509 { 00:20:25.509 "name": "BaseBdev2", 00:20:25.509 "uuid": "60ff16f2-af31-50f0-8c51-b93bc0799cea", 00:20:25.509 "is_configured": true, 00:20:25.509 "data_offset": 2048, 00:20:25.509 "data_size": 63488 00:20:25.509 } 00:20:25.509 ] 00:20:25.509 }' 00:20:25.509 10:45:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:25.767 10:45:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:25.767 10:45:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:25.767 10:45:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:25.767 10:45:52 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.767 10:45:52 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:26.026 10:45:52 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.026 10:45:52 -- bdev/bdev_raid.sh@709 -- # killprocess 133864 00:20:26.026 10:45:52 -- common/autotest_common.sh@926 -- # '[' -z 133864 ']' 00:20:26.026 10:45:52 -- common/autotest_common.sh@930 -- # kill -0 133864 00:20:26.026 10:45:52 -- common/autotest_common.sh@931 -- # uname 00:20:26.026 10:45:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.026 10:45:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133864 00:20:26.026 10:45:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:26.026 killing process with pid 133864 00:20:26.026 10:45:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:26.026 10:45:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133864' 00:20:26.026 10:45:52 -- common/autotest_common.sh@945 -- # kill 133864 00:20:26.026 Received shutdown signal, test time was about 60.000000 seconds 00:20:26.026 00:20:26.026 Latency(us) 00:20:26.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.026 =================================================================================================================== 00:20:26.026 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.026 10:45:52 -- common/autotest_common.sh@950 -- # wait 133864 00:20:26.026 [2024-07-24 10:45:52.563248] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.026 [2024-07-24 10:45:52.563406] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.026 [2024-07-24 10:45:52.563498] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.026 [2024-07-24 10:45:52.563532] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:20:26.026 [2024-07-24 10:45:52.605254] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:26.593 
10:45:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:26.593 00:20:26.593 real 0m26.247s 00:20:26.593 user 0m38.411s 00:20:26.593 sys 0m4.489s 00:20:26.593 10:45:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:26.593 ************************************ 00:20:26.593 END TEST raid_rebuild_test_sb 00:20:26.593 ************************************ 00:20:26.593 10:45:52 -- common/autotest_common.sh@10 -- # set +x 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:26.593 10:45:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:26.593 10:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:26.593 10:45:53 -- common/autotest_common.sh@10 -- # set +x 00:20:26.593 ************************************ 00:20:26.593 START TEST raid_rebuild_test_io 00:20:26.593 ************************************ 00:20:26.593 10:45:53 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:26.593 10:45:53 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@544 -- # raid_pid=134506 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134506 /var/tmp/spdk-raid.sock 00:20:26.594 10:45:53 -- common/autotest_common.sh@819 -- # '[' -z 134506 ']' 00:20:26.594 10:45:53 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:26.594 10:45:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:26.594 10:45:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:26.594 10:45:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:26.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
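The raid_rebuild_test_io run above starts bdevperf idle (-z) on a private RPC socket and only then builds the array it will exercise. A minimal sketch of that flow, run by hand from an SPDK source tree (socket path, sizes and bdev names are copied from the trace and may need adjusting):

  # launch bdevperf waiting for RPC configuration, targeting raid_bdev1 with 3M random read/write I/O
  ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  # create two 32 MB malloc base bdevs with 512-byte blocks, then a raid1 bdev on top of them
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
  # start the queued background I/O once the target bdev exists
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests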
00:20:26.594 10:45:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:26.594 10:45:53 -- common/autotest_common.sh@10 -- # set +x 00:20:26.594 [2024-07-24 10:45:53.090272] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:26.594 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:26.594 Zero copy mechanism will not be used. 00:20:26.594 [2024-07-24 10:45:53.090490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134506 ] 00:20:26.594 [2024-07-24 10:45:53.230773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.852 [2024-07-24 10:45:53.350765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.852 [2024-07-24 10:45:53.426568] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:27.462 10:45:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.462 10:45:54 -- common/autotest_common.sh@852 -- # return 0 00:20:27.462 10:45:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:27.462 10:45:54 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:27.462 10:45:54 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:27.720 BaseBdev1 00:20:27.720 10:45:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:27.720 10:45:54 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:27.720 10:45:54 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:27.979 BaseBdev2 00:20:27.979 10:45:54 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:28.238 spare_malloc 00:20:28.238 10:45:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:28.497 spare_delay 00:20:28.497 10:45:55 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:28.755 [2024-07-24 10:45:55.338505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:28.755 [2024-07-24 10:45:55.338695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.755 [2024-07-24 10:45:55.338758] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:28.755 [2024-07-24 10:45:55.338824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.755 [2024-07-24 10:45:55.341945] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.755 [2024-07-24 10:45:55.342016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:28.755 spare 00:20:28.755 10:45:55 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:29.013 [2024-07-24 10:45:55.562603] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:29.013 [2024-07-24 10:45:55.565220] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:29.013 [2024-07-24 10:45:55.565353] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:29.013 [2024-07-24 10:45:55.565370] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:29.013 [2024-07-24 10:45:55.565579] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:20:29.013 [2024-07-24 10:45:55.566093] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:29.013 [2024-07-24 10:45:55.566117] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007280 00:20:29.013 [2024-07-24 10:45:55.566407] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.013 10:45:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.271 10:45:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.271 "name": "raid_bdev1", 00:20:29.271 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:29.271 "strip_size_kb": 0, 00:20:29.271 "state": "online", 00:20:29.271 "raid_level": "raid1", 00:20:29.271 "superblock": false, 00:20:29.271 "num_base_bdevs": 2, 00:20:29.271 "num_base_bdevs_discovered": 2, 00:20:29.271 "num_base_bdevs_operational": 2, 00:20:29.271 "base_bdevs_list": [ 00:20:29.271 { 00:20:29.271 "name": "BaseBdev1", 00:20:29.271 "uuid": "2840c6ce-c14e-4adf-86fc-4c239c402fcc", 00:20:29.271 "is_configured": true, 00:20:29.271 "data_offset": 0, 00:20:29.271 "data_size": 65536 00:20:29.271 }, 00:20:29.271 { 00:20:29.271 "name": "BaseBdev2", 00:20:29.271 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:29.271 "is_configured": true, 00:20:29.271 "data_offset": 0, 00:20:29.271 "data_size": 65536 00:20:29.271 } 00:20:29.271 ] 00:20:29.271 }' 00:20:29.271 10:45:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.271 10:45:55 -- common/autotest_common.sh@10 -- # set +x 00:20:29.838 10:45:56 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:29.838 10:45:56 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:30.096 [2024-07-24 10:45:56.707080] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:30.096 10:45:56 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:30.096 10:45:56 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:30.096 10:45:56 -- bdev/bdev_raid.sh@570 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.354 10:45:57 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:30.354 10:45:57 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:30.354 10:45:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:30.354 10:45:57 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:30.613 [2024-07-24 10:45:57.114846] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:20:30.613 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:30.613 Zero copy mechanism will not be used. 00:20:30.613 Running I/O for 60 seconds... 00:20:30.613 [2024-07-24 10:45:57.224710] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.613 [2024-07-24 10:45:57.238736] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.613 10:45:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.871 10:45:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.871 "name": "raid_bdev1", 00:20:30.871 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:30.871 "strip_size_kb": 0, 00:20:30.871 "state": "online", 00:20:30.871 "raid_level": "raid1", 00:20:30.871 "superblock": false, 00:20:30.871 "num_base_bdevs": 2, 00:20:30.871 "num_base_bdevs_discovered": 1, 00:20:30.871 "num_base_bdevs_operational": 1, 00:20:30.871 "base_bdevs_list": [ 00:20:30.871 { 00:20:30.871 "name": null, 00:20:30.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.871 "is_configured": false, 00:20:30.871 "data_offset": 0, 00:20:30.871 "data_size": 65536 00:20:30.872 }, 00:20:30.872 { 00:20:30.872 "name": "BaseBdev2", 00:20:30.872 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:30.872 "is_configured": true, 00:20:30.872 "data_offset": 0, 00:20:30.872 "data_size": 65536 00:20:30.872 } 00:20:30.872 ] 00:20:30.872 }' 00:20:30.872 10:45:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.872 10:45:57 -- common/autotest_common.sh@10 -- # set +x 00:20:31.824 10:45:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:31.824 [2024-07-24 10:45:58.436921] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:31.824 [2024-07-24 10:45:58.437021] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:31.824 10:45:58 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:31.824 [2024-07-24 10:45:58.494973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:31.824 [2024-07-24 10:45:58.497525] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.082 [2024-07-24 10:45:58.630020] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:32.082 [2024-07-24 10:45:58.630733] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:32.341 [2024-07-24 10:45:58.855383] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:32.341 [2024-07-24 10:45:58.855835] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:32.600 [2024-07-24 10:45:59.189686] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:32.858 [2024-07-24 10:45:59.443149] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:32.859 10:45:59 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.859 10:45:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.859 10:45:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:32.859 10:45:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:32.859 10:45:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.859 10:45:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.859 10:45:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.117 [2024-07-24 10:45:59.696363] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:33.117 10:45:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:33.117 "name": "raid_bdev1", 00:20:33.117 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:33.117 "strip_size_kb": 0, 00:20:33.117 "state": "online", 00:20:33.117 "raid_level": "raid1", 00:20:33.118 "superblock": false, 00:20:33.118 "num_base_bdevs": 2, 00:20:33.118 "num_base_bdevs_discovered": 2, 00:20:33.118 "num_base_bdevs_operational": 2, 00:20:33.118 "process": { 00:20:33.118 "type": "rebuild", 00:20:33.118 "target": "spare", 00:20:33.118 "progress": { 00:20:33.118 "blocks": 14336, 00:20:33.118 "percent": 21 00:20:33.118 } 00:20:33.118 }, 00:20:33.118 "base_bdevs_list": [ 00:20:33.118 { 00:20:33.118 "name": "spare", 00:20:33.118 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:33.118 "is_configured": true, 00:20:33.118 "data_offset": 0, 00:20:33.118 "data_size": 65536 00:20:33.118 }, 00:20:33.118 { 00:20:33.118 "name": "BaseBdev2", 00:20:33.118 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:33.118 "is_configured": true, 00:20:33.118 "data_offset": 0, 00:20:33.118 "data_size": 65536 00:20:33.118 } 00:20:33.118 ] 00:20:33.118 }' 00:20:33.118 10:45:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:33.376 10:45:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:33.376 10:45:59 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:20:33.376 [2024-07-24 10:45:59.828064] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:33.376 [2024-07-24 10:45:59.828472] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:33.376 10:45:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:33.376 10:45:59 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:33.634 [2024-07-24 10:46:00.097794] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:33.634 [2024-07-24 10:46:00.201231] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:33.634 [2024-07-24 10:46:00.201918] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:33.634 [2024-07-24 10:46:00.310801] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:33.893 [2024-07-24 10:46:00.321104] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.893 [2024-07-24 10:46:00.352086] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000021f0 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.893 10:46:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.151 10:46:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:34.151 "name": "raid_bdev1", 00:20:34.151 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:34.151 "strip_size_kb": 0, 00:20:34.151 "state": "online", 00:20:34.151 "raid_level": "raid1", 00:20:34.151 "superblock": false, 00:20:34.151 "num_base_bdevs": 2, 00:20:34.151 "num_base_bdevs_discovered": 1, 00:20:34.151 "num_base_bdevs_operational": 1, 00:20:34.151 "base_bdevs_list": [ 00:20:34.151 { 00:20:34.151 "name": null, 00:20:34.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.151 "is_configured": false, 00:20:34.151 "data_offset": 0, 00:20:34.151 "data_size": 65536 00:20:34.151 }, 00:20:34.151 { 00:20:34.151 "name": "BaseBdev2", 00:20:34.151 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:34.151 "is_configured": true, 00:20:34.151 "data_offset": 0, 00:20:34.151 "data_size": 65536 00:20:34.151 } 00:20:34.151 ] 00:20:34.151 }' 00:20:34.151 10:46:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:34.151 10:46:00 -- common/autotest_common.sh@10 -- # set +x 00:20:34.717 10:46:01 -- 
bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.717 10:46:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.717 10:46:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:34.717 10:46:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:34.717 10:46:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.717 10:46:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.717 10:46:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.976 10:46:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.976 "name": "raid_bdev1", 00:20:34.976 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:34.976 "strip_size_kb": 0, 00:20:34.976 "state": "online", 00:20:34.976 "raid_level": "raid1", 00:20:34.976 "superblock": false, 00:20:34.976 "num_base_bdevs": 2, 00:20:34.976 "num_base_bdevs_discovered": 1, 00:20:34.976 "num_base_bdevs_operational": 1, 00:20:34.976 "base_bdevs_list": [ 00:20:34.976 { 00:20:34.976 "name": null, 00:20:34.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.976 "is_configured": false, 00:20:34.976 "data_offset": 0, 00:20:34.976 "data_size": 65536 00:20:34.976 }, 00:20:34.976 { 00:20:34.976 "name": "BaseBdev2", 00:20:34.976 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:34.976 "is_configured": true, 00:20:34.976 "data_offset": 0, 00:20:34.976 "data_size": 65536 00:20:34.976 } 00:20:34.976 ] 00:20:34.976 }' 00:20:34.976 10:46:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.976 10:46:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:34.976 10:46:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.976 10:46:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:34.976 10:46:01 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:35.234 [2024-07-24 10:46:01.861444] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:35.234 [2024-07-24 10:46:01.861524] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.234 10:46:01 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:35.234 [2024-07-24 10:46:01.897858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:35.234 [2024-07-24 10:46:01.900302] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:35.518 [2024-07-24 10:46:02.017626] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:35.518 [2024-07-24 10:46:02.018335] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:35.786 [2024-07-24 10:46:02.235053] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:35.786 [2024-07-24 10:46:02.235467] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:36.044 [2024-07-24 10:46:02.560914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:36.044 [2024-07-24 10:46:02.677879] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 
offset_end: 12288 00:20:36.044 [2024-07-24 10:46:02.678259] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:36.303 10:46:02 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.303 10:46:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.303 10:46:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:36.303 10:46:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:36.303 10:46:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.303 10:46:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.303 10:46:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.561 [2024-07-24 10:46:03.016496] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:36.561 [2024-07-24 10:46:03.017197] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:36.561 10:46:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:36.561 "name": "raid_bdev1", 00:20:36.561 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:36.561 "strip_size_kb": 0, 00:20:36.561 "state": "online", 00:20:36.561 "raid_level": "raid1", 00:20:36.561 "superblock": false, 00:20:36.561 "num_base_bdevs": 2, 00:20:36.561 "num_base_bdevs_discovered": 2, 00:20:36.561 "num_base_bdevs_operational": 2, 00:20:36.561 "process": { 00:20:36.561 "type": "rebuild", 00:20:36.561 "target": "spare", 00:20:36.561 "progress": { 00:20:36.561 "blocks": 14336, 00:20:36.561 "percent": 21 00:20:36.561 } 00:20:36.561 }, 00:20:36.561 "base_bdevs_list": [ 00:20:36.561 { 00:20:36.561 "name": "spare", 00:20:36.561 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:36.561 "is_configured": true, 00:20:36.561 "data_offset": 0, 00:20:36.561 "data_size": 65536 00:20:36.561 }, 00:20:36.561 { 00:20:36.561 "name": "BaseBdev2", 00:20:36.561 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:36.561 "is_configured": true, 00:20:36.561 "data_offset": 0, 00:20:36.561 "data_size": 65536 00:20:36.561 } 00:20:36.561 ] 00:20:36.561 }' 00:20:36.561 10:46:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:36.561 10:46:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.561 10:46:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.561 [2024-07-24 10:46:03.226350] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@657 -- # local timeout=443 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:36.820 10:46:03 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.820 10:46:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.078 10:46:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:37.078 "name": "raid_bdev1", 00:20:37.078 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:37.078 "strip_size_kb": 0, 00:20:37.078 "state": "online", 00:20:37.078 "raid_level": "raid1", 00:20:37.078 "superblock": false, 00:20:37.078 "num_base_bdevs": 2, 00:20:37.078 "num_base_bdevs_discovered": 2, 00:20:37.078 "num_base_bdevs_operational": 2, 00:20:37.078 "process": { 00:20:37.078 "type": "rebuild", 00:20:37.078 "target": "spare", 00:20:37.078 "progress": { 00:20:37.078 "blocks": 18432, 00:20:37.078 "percent": 28 00:20:37.078 } 00:20:37.078 }, 00:20:37.078 "base_bdevs_list": [ 00:20:37.078 { 00:20:37.078 "name": "spare", 00:20:37.078 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:37.078 "is_configured": true, 00:20:37.078 "data_offset": 0, 00:20:37.078 "data_size": 65536 00:20:37.078 }, 00:20:37.078 { 00:20:37.078 "name": "BaseBdev2", 00:20:37.078 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:37.078 "is_configured": true, 00:20:37.078 "data_offset": 0, 00:20:37.078 "data_size": 65536 00:20:37.078 } 00:20:37.078 ] 00:20:37.078 }' 00:20:37.078 10:46:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:37.078 [2024-07-24 10:46:03.546070] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:37.078 [2024-07-24 10:46:03.546754] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:37.078 10:46:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.078 10:46:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:37.078 10:46:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.078 10:46:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:37.078 [2024-07-24 10:46:03.759101] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:37.644 [2024-07-24 10:46:04.092529] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.210 10:46:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.468 10:46:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:38.468 "name": "raid_bdev1", 00:20:38.468 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:38.468 "strip_size_kb": 0, 00:20:38.468 "state": "online", 00:20:38.468 "raid_level": "raid1", 
00:20:38.468 "superblock": false, 00:20:38.468 "num_base_bdevs": 2, 00:20:38.468 "num_base_bdevs_discovered": 2, 00:20:38.468 "num_base_bdevs_operational": 2, 00:20:38.468 "process": { 00:20:38.468 "type": "rebuild", 00:20:38.468 "target": "spare", 00:20:38.468 "progress": { 00:20:38.468 "blocks": 38912, 00:20:38.468 "percent": 59 00:20:38.468 } 00:20:38.468 }, 00:20:38.468 "base_bdevs_list": [ 00:20:38.468 { 00:20:38.468 "name": "spare", 00:20:38.468 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:38.468 "is_configured": true, 00:20:38.468 "data_offset": 0, 00:20:38.468 "data_size": 65536 00:20:38.468 }, 00:20:38.468 { 00:20:38.468 "name": "BaseBdev2", 00:20:38.468 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:38.468 "is_configured": true, 00:20:38.468 "data_offset": 0, 00:20:38.468 "data_size": 65536 00:20:38.468 } 00:20:38.468 ] 00:20:38.468 }' 00:20:38.468 10:46:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:38.468 10:46:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.468 10:46:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:38.468 10:46:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.468 10:46:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:38.468 [2024-07-24 10:46:05.147452] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:38.468 [2024-07-24 10:46:05.148217] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:39.034 [2024-07-24 10:46:05.588973] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:39.292 [2024-07-24 10:46:05.803322] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.550 10:46:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.815 10:46:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:39.815 "name": "raid_bdev1", 00:20:39.815 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:39.815 "strip_size_kb": 0, 00:20:39.815 "state": "online", 00:20:39.815 "raid_level": "raid1", 00:20:39.815 "superblock": false, 00:20:39.815 "num_base_bdevs": 2, 00:20:39.815 "num_base_bdevs_discovered": 2, 00:20:39.815 "num_base_bdevs_operational": 2, 00:20:39.815 "process": { 00:20:39.815 "type": "rebuild", 00:20:39.815 "target": "spare", 00:20:39.815 "progress": { 00:20:39.815 "blocks": 59392, 00:20:39.815 "percent": 90 00:20:39.815 } 00:20:39.815 }, 00:20:39.815 "base_bdevs_list": [ 00:20:39.815 { 00:20:39.815 "name": "spare", 00:20:39.815 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:39.815 "is_configured": true, 00:20:39.815 "data_offset": 0, 00:20:39.815 "data_size": 65536 00:20:39.815 }, 00:20:39.815 { 
00:20:39.815 "name": "BaseBdev2", 00:20:39.815 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:39.815 "is_configured": true, 00:20:39.815 "data_offset": 0, 00:20:39.815 "data_size": 65536 00:20:39.815 } 00:20:39.815 ] 00:20:39.815 }' 00:20:39.815 10:46:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:39.815 10:46:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.815 10:46:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:39.815 10:46:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.815 10:46:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:39.815 [2024-07-24 10:46:06.481128] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:40.085 [2024-07-24 10:46:06.581102] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:40.085 [2024-07-24 10:46:06.591596] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:41.020 "name": "raid_bdev1", 00:20:41.020 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:41.020 "strip_size_kb": 0, 00:20:41.020 "state": "online", 00:20:41.020 "raid_level": "raid1", 00:20:41.020 "superblock": false, 00:20:41.020 "num_base_bdevs": 2, 00:20:41.020 "num_base_bdevs_discovered": 2, 00:20:41.020 "num_base_bdevs_operational": 2, 00:20:41.020 "base_bdevs_list": [ 00:20:41.020 { 00:20:41.020 "name": "spare", 00:20:41.020 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:41.020 "is_configured": true, 00:20:41.020 "data_offset": 0, 00:20:41.020 "data_size": 65536 00:20:41.020 }, 00:20:41.020 { 00:20:41.020 "name": "BaseBdev2", 00:20:41.020 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:41.020 "is_configured": true, 00:20:41.020 "data_offset": 0, 00:20:41.020 "data_size": 65536 00:20:41.020 } 00:20:41.020 ] 00:20:41.020 }' 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:41.020 10:46:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:41.278 10:46:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:41.278 10:46:07 -- bdev/bdev_raid.sh@660 -- # break 00:20:41.279 10:46:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:41.279 10:46:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:41.279 10:46:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:41.279 10:46:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:41.279 10:46:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:41.279 10:46:07 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.279 10:46:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.538 10:46:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:41.538 "name": "raid_bdev1", 00:20:41.538 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:41.538 "strip_size_kb": 0, 00:20:41.538 "state": "online", 00:20:41.538 "raid_level": "raid1", 00:20:41.538 "superblock": false, 00:20:41.538 "num_base_bdevs": 2, 00:20:41.538 "num_base_bdevs_discovered": 2, 00:20:41.538 "num_base_bdevs_operational": 2, 00:20:41.538 "base_bdevs_list": [ 00:20:41.538 { 00:20:41.538 "name": "spare", 00:20:41.538 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:41.538 "is_configured": true, 00:20:41.538 "data_offset": 0, 00:20:41.538 "data_size": 65536 00:20:41.538 }, 00:20:41.538 { 00:20:41.538 "name": "BaseBdev2", 00:20:41.538 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:41.538 "is_configured": true, 00:20:41.538 "data_offset": 0, 00:20:41.538 "data_size": 65536 00:20:41.538 } 00:20:41.538 ] 00:20:41.538 }' 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.538 10:46:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.797 10:46:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:41.797 "name": "raid_bdev1", 00:20:41.797 "uuid": "9d1a87e3-86f1-4ded-aab3-85ae742dbf75", 00:20:41.797 "strip_size_kb": 0, 00:20:41.797 "state": "online", 00:20:41.797 "raid_level": "raid1", 00:20:41.797 "superblock": false, 00:20:41.797 "num_base_bdevs": 2, 00:20:41.797 "num_base_bdevs_discovered": 2, 00:20:41.797 "num_base_bdevs_operational": 2, 00:20:41.797 "base_bdevs_list": [ 00:20:41.797 { 00:20:41.797 "name": "spare", 00:20:41.797 "uuid": "f4de129f-534f-5eaf-ab50-66ec3dbe022c", 00:20:41.797 "is_configured": true, 00:20:41.797 "data_offset": 0, 00:20:41.797 "data_size": 65536 00:20:41.797 }, 00:20:41.797 { 00:20:41.797 "name": "BaseBdev2", 00:20:41.797 "uuid": "cb7e8023-0506-40b0-9935-65f6af965b5d", 00:20:41.797 "is_configured": true, 00:20:41.797 "data_offset": 0, 00:20:41.797 "data_size": 65536 00:20:41.797 } 00:20:41.797 ] 00:20:41.797 }' 00:20:41.797 10:46:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:41.797 10:46:08 -- common/autotest_common.sh@10 -- # set +x 
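Each verify step above reduces to the same query: dump every raid bdev over RPC and filter the one of interest with jq. A minimal sketch, assuming the same socket and bdev name as in the trace:

  # show the array state plus rebuild process info (null once no rebuild is in progress)
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1") | {state, raid_level, num_base_bdevs_discovered, process}'
  # tear the array down afterwards, as the test does next
  ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1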
00:20:42.364 10:46:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:42.623 [2024-07-24 10:46:09.216691] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.623 [2024-07-24 10:46:09.216741] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.623 00:20:42.623 Latency(us) 00:20:42.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.623 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:42.623 raid_bdev1 : 12.13 104.64 313.93 0.00 0.00 12898.05 309.06 125829.12 00:20:42.623 =================================================================================================================== 00:20:42.623 Total : 104.64 313.93 0.00 0.00 12898.05 309.06 125829.12 00:20:42.623 [2024-07-24 10:46:09.249537] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.623 [2024-07-24 10:46:09.249765] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.623 0 00:20:42.623 [2024-07-24 10:46:09.249927] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.623 [2024-07-24 10:46:09.249946] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid_bdev1, state offline 00:20:42.623 10:46:09 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:42.623 10:46:09 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.882 10:46:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:42.882 10:46:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:42.882 10:46:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@12 -- # local i 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:42.882 10:46:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:43.141 /dev/nbd0 00:20:43.141 10:46:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:43.141 10:46:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:43.141 10:46:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:43.141 10:46:09 -- common/autotest_common.sh@857 -- # local i 00:20:43.141 10:46:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:43.141 10:46:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:43.141 10:46:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:43.141 10:46:09 -- common/autotest_common.sh@861 -- # break 00:20:43.141 10:46:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:43.141 10:46:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:43.141 10:46:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:20:43.141 1+0 records in 00:20:43.141 1+0 records out 00:20:43.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444928 s, 9.2 MB/s 00:20:43.141 10:46:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.141 10:46:09 -- common/autotest_common.sh@874 -- # size=4096 00:20:43.141 10:46:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.399 10:46:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:43.399 10:46:09 -- common/autotest_common.sh@877 -- # return 0 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.399 10:46:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:43.399 10:46:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:43.399 10:46:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@12 -- # local i 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.399 10:46:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:43.657 /dev/nbd1 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:43.657 10:46:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:43.657 10:46:10 -- common/autotest_common.sh@857 -- # local i 00:20:43.657 10:46:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:43.657 10:46:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:43.657 10:46:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:43.657 10:46:10 -- common/autotest_common.sh@861 -- # break 00:20:43.657 10:46:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:43.657 10:46:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:43.657 10:46:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:43.657 1+0 records in 00:20:43.657 1+0 records out 00:20:43.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580454 s, 7.1 MB/s 00:20:43.657 10:46:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.657 10:46:10 -- common/autotest_common.sh@874 -- # size=4096 00:20:43.657 10:46:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:43.657 10:46:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:43.657 10:46:10 -- common/autotest_common.sh@877 -- # return 0 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:43.657 10:46:10 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:43.657 10:46:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@49 
-- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@51 -- # local i 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.657 10:46:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@41 -- # break 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@45 -- # return 0 00:20:43.914 10:46:10 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@51 -- # local i 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:43.914 10:46:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@41 -- # break 00:20:44.185 10:46:10 -- bdev/nbd_common.sh@45 -- # return 0 00:20:44.185 10:46:10 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:44.185 10:46:10 -- bdev/bdev_raid.sh@709 -- # killprocess 134506 00:20:44.185 10:46:10 -- common/autotest_common.sh@926 -- # '[' -z 134506 ']' 00:20:44.185 10:46:10 -- common/autotest_common.sh@930 -- # kill -0 134506 00:20:44.185 10:46:10 -- common/autotest_common.sh@931 -- # uname 00:20:44.185 10:46:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:44.185 10:46:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134506 00:20:44.185 10:46:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:44.185 10:46:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:44.185 10:46:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134506' 00:20:44.185 killing process with pid 134506 00:20:44.185 10:46:10 -- common/autotest_common.sh@945 -- # kill 134506 00:20:44.185 Received shutdown signal, test time was about 13.730524 seconds 00:20:44.185 00:20:44.185 Latency(us) 00:20:44.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.185 =================================================================================================================== 00:20:44.185 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.185 10:46:10 -- common/autotest_common.sh@950 
-- # wait 134506 00:20:44.185 [2024-07-24 10:46:10.848616] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:44.467 [2024-07-24 10:46:10.886216] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:44.726 00:20:44.726 real 0m18.206s 00:20:44.726 user 0m28.419s 00:20:44.726 sys 0m2.070s 00:20:44.726 10:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.726 10:46:11 -- common/autotest_common.sh@10 -- # set +x 00:20:44.726 ************************************ 00:20:44.726 END TEST raid_rebuild_test_io 00:20:44.726 ************************************ 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:44.726 10:46:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:44.726 10:46:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:44.726 10:46:11 -- common/autotest_common.sh@10 -- # set +x 00:20:44.726 ************************************ 00:20:44.726 START TEST raid_rebuild_test_sb_io 00:20:44.726 ************************************ 00:20:44.726 10:46:11 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@544 -- # raid_pid=134989 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134989 /var/tmp/spdk-raid.sock 00:20:44.726 10:46:11 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:44.726 10:46:11 -- common/autotest_common.sh@819 -- # '[' -z 134989 ']' 00:20:44.726 10:46:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:44.726 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock... 00:20:44.726 10:46:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:44.726 10:46:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:44.726 10:46:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:44.726 10:46:11 -- common/autotest_common.sh@10 -- # set +x 00:20:44.726 [2024-07-24 10:46:11.371145] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:44.726 [2024-07-24 10:46:11.371680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134989 ] 00:20:44.726 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:44.726 Zero copy mechanism will not be used. 00:20:44.985 [2024-07-24 10:46:11.521807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.985 [2024-07-24 10:46:11.647524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.243 [2024-07-24 10:46:11.723740] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.810 10:46:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:45.810 10:46:12 -- common/autotest_common.sh@852 -- # return 0 00:20:45.810 10:46:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:45.810 10:46:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:45.810 10:46:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:46.069 BaseBdev1_malloc 00:20:46.069 10:46:12 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:46.327 [2024-07-24 10:46:12.841097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:46.327 [2024-07-24 10:46:12.841568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.327 [2024-07-24 10:46:12.841795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:20:46.327 [2024-07-24 10:46:12.841974] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.327 [2024-07-24 10:46:12.845145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.327 [2024-07-24 10:46:12.845359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:46.327 BaseBdev1 00:20:46.327 10:46:12 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:46.327 10:46:12 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:46.327 10:46:12 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:46.585 BaseBdev2_malloc 00:20:46.585 10:46:13 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:46.846 [2024-07-24 10:46:13.357110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:46.846 [2024-07-24 10:46:13.357579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:46.846 [2024-07-24 10:46:13.357681] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:20:46.846 [2024-07-24 10:46:13.358033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:46.846 [2024-07-24 10:46:13.361036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:46.846 [2024-07-24 10:46:13.361222] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:46.846 BaseBdev2 00:20:46.846 10:46:13 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:47.106 spare_malloc 00:20:47.106 10:46:13 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:47.364 spare_delay 00:20:47.364 10:46:13 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:47.623 [2024-07-24 10:46:14.107286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:47.623 [2024-07-24 10:46:14.107772] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:47.623 [2024-07-24 10:46:14.107987] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:47.623 [2024-07-24 10:46:14.108163] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:47.623 [2024-07-24 10:46:14.111123] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:47.623 [2024-07-24 10:46:14.111328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:47.623 spare 00:20:47.623 10:46:14 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:47.882 [2024-07-24 10:46:14.335971] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:47.882 [2024-07-24 10:46:14.338658] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:47.882 [2024-07-24 10:46:14.339085] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:20:47.882 [2024-07-24 10:46:14.339249] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:47.882 [2024-07-24 10:46:14.339566] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:20:47.882 [2024-07-24 10:46:14.340168] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:20:47.882 [2024-07-24 10:46:14.340313] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:20:47.882 [2024-07-24 10:46:14.340676] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:47.882 10:46:14 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.882 10:46:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.141 10:46:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:48.141 "name": "raid_bdev1", 00:20:48.141 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:48.141 "strip_size_kb": 0, 00:20:48.141 "state": "online", 00:20:48.141 "raid_level": "raid1", 00:20:48.141 "superblock": true, 00:20:48.141 "num_base_bdevs": 2, 00:20:48.141 "num_base_bdevs_discovered": 2, 00:20:48.141 "num_base_bdevs_operational": 2, 00:20:48.141 "base_bdevs_list": [ 00:20:48.141 { 00:20:48.141 "name": "BaseBdev1", 00:20:48.141 "uuid": "add0de6d-6e11-5b02-9145-9fc4fa68ea05", 00:20:48.141 "is_configured": true, 00:20:48.141 "data_offset": 2048, 00:20:48.141 "data_size": 63488 00:20:48.141 }, 00:20:48.141 { 00:20:48.141 "name": "BaseBdev2", 00:20:48.141 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:48.141 "is_configured": true, 00:20:48.141 "data_offset": 2048, 00:20:48.141 "data_size": 63488 00:20:48.141 } 00:20:48.141 ] 00:20:48.141 }' 00:20:48.141 10:46:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:48.141 10:46:14 -- common/autotest_common.sh@10 -- # set +x 00:20:48.709 10:46:15 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:48.709 10:46:15 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:48.967 [2024-07-24 10:46:15.457146] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.967 10:46:15 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:48.967 10:46:15 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:48.967 10:46:15 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.226 10:46:15 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:49.226 10:46:15 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:49.226 10:46:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:49.226 10:46:15 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:49.226 [2024-07-24 10:46:15.812559] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:20:49.226 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:49.226 Zero copy mechanism will not be used. 00:20:49.226 Running I/O for 60 seconds... 
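The xtrace above condenses the degrade-under-I/O step of raid_rebuild_test_sb_io: bdevperf's perform_tests helper keeps random read/write traffic running against raid_bdev1 for 60 seconds while bdev_raid_remove_base_bdev pulls BaseBdev1 out of the raid1 set, after which the script re-reads the array state and expects it to stay online with a single discovered base bdev. A minimal sketch of that sequence, using only the RPCs visible in this log (the socket path and repo layout are simply the ones this run happens to use):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Drive the background workload against raid_bdev1 via bdevperf's RPC helper.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/spdk-raid.sock perform_tests &

# Pull one member out of the raid1 set while I/O is in flight...
$rpc bdev_raid_remove_base_bdev BaseBdev1

# ...then confirm the array is still online but degraded to one base bdev.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'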
00:20:49.485 [2024-07-24 10:46:15.953414] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.485 [2024-07-24 10:46:15.954046] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.485 10:46:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.743 10:46:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.743 "name": "raid_bdev1", 00:20:49.743 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:49.743 "strip_size_kb": 0, 00:20:49.743 "state": "online", 00:20:49.743 "raid_level": "raid1", 00:20:49.743 "superblock": true, 00:20:49.743 "num_base_bdevs": 2, 00:20:49.743 "num_base_bdevs_discovered": 1, 00:20:49.743 "num_base_bdevs_operational": 1, 00:20:49.743 "base_bdevs_list": [ 00:20:49.743 { 00:20:49.743 "name": null, 00:20:49.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.743 "is_configured": false, 00:20:49.743 "data_offset": 2048, 00:20:49.743 "data_size": 63488 00:20:49.743 }, 00:20:49.743 { 00:20:49.743 "name": "BaseBdev2", 00:20:49.743 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:49.743 "is_configured": true, 00:20:49.743 "data_offset": 2048, 00:20:49.743 "data_size": 63488 00:20:49.743 } 00:20:49.743 ] 00:20:49.743 }' 00:20:49.743 10:46:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.743 10:46:16 -- common/autotest_common.sh@10 -- # set +x 00:20:50.310 10:46:16 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:50.568 [2024-07-24 10:46:17.164820] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:50.568 [2024-07-24 10:46:17.165260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:50.568 10:46:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:50.568 [2024-07-24 10:46:17.221188] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:20:50.568 [2024-07-24 10:46:17.223953] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.827 [2024-07-24 10:46:17.347625] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:50.827 [2024-07-24 10:46:17.348639] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:51.085 [2024-07-24 10:46:17.557562] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:20:51.085 [2024-07-24 10:46:17.558365] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:51.343 [2024-07-24 10:46:17.925570] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:51.601 [2024-07-24 10:46:18.146114] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:51.601 10:46:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.601 10:46:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.601 10:46:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:51.601 10:46:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:51.601 10:46:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.601 10:46:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.601 10:46:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.860 10:46:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.860 "name": "raid_bdev1", 00:20:51.860 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:51.860 "strip_size_kb": 0, 00:20:51.860 "state": "online", 00:20:51.860 "raid_level": "raid1", 00:20:51.860 "superblock": true, 00:20:51.860 "num_base_bdevs": 2, 00:20:51.860 "num_base_bdevs_discovered": 2, 00:20:51.860 "num_base_bdevs_operational": 2, 00:20:51.860 "process": { 00:20:51.860 "type": "rebuild", 00:20:51.860 "target": "spare", 00:20:51.860 "progress": { 00:20:51.860 "blocks": 12288, 00:20:51.860 "percent": 19 00:20:51.860 } 00:20:51.860 }, 00:20:51.860 "base_bdevs_list": [ 00:20:51.860 { 00:20:51.860 "name": "spare", 00:20:51.860 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:20:51.860 "is_configured": true, 00:20:51.860 "data_offset": 2048, 00:20:51.860 "data_size": 63488 00:20:51.860 }, 00:20:51.860 { 00:20:51.860 "name": "BaseBdev2", 00:20:51.860 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:51.860 "is_configured": true, 00:20:51.860 "data_offset": 2048, 00:20:51.860 "data_size": 63488 00:20:51.860 } 00:20:51.860 ] 00:20:51.860 }' 00:20:51.860 10:46:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.860 10:46:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.860 10:46:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.860 [2024-07-24 10:46:18.520963] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:52.118 10:46:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.118 10:46:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:52.118 [2024-07-24 10:46:18.757127] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:52.118 [2024-07-24 10:46:18.789137] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:52.377 [2024-07-24 10:46:18.910019] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:52.377 [2024-07-24 10:46:18.929689] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.377 [2024-07-24 10:46:18.955929] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.377 10:46:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.377 10:46:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.636 10:46:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.636 "name": "raid_bdev1", 00:20:52.636 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:52.636 "strip_size_kb": 0, 00:20:52.636 "state": "online", 00:20:52.636 "raid_level": "raid1", 00:20:52.636 "superblock": true, 00:20:52.636 "num_base_bdevs": 2, 00:20:52.636 "num_base_bdevs_discovered": 1, 00:20:52.636 "num_base_bdevs_operational": 1, 00:20:52.636 "base_bdevs_list": [ 00:20:52.636 { 00:20:52.636 "name": null, 00:20:52.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.636 "is_configured": false, 00:20:52.636 "data_offset": 2048, 00:20:52.636 "data_size": 63488 00:20:52.636 }, 00:20:52.636 { 00:20:52.636 "name": "BaseBdev2", 00:20:52.636 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:52.636 "is_configured": true, 00:20:52.636 "data_offset": 2048, 00:20:52.636 "data_size": 63488 00:20:52.636 } 00:20:52.636 ] 00:20:52.636 }' 00:20:52.636 10:46:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.636 10:46:19 -- common/autotest_common.sh@10 -- # set +x 00:20:53.573 10:46:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:53.573 10:46:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.573 10:46:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:53.573 10:46:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:53.574 10:46:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.574 10:46:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.574 10:46:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.574 10:46:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:53.574 "name": "raid_bdev1", 00:20:53.574 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:53.574 "strip_size_kb": 0, 00:20:53.574 "state": "online", 00:20:53.574 "raid_level": "raid1", 00:20:53.574 "superblock": true, 00:20:53.574 "num_base_bdevs": 2, 00:20:53.574 "num_base_bdevs_discovered": 1, 00:20:53.574 "num_base_bdevs_operational": 1, 00:20:53.574 "base_bdevs_list": [ 00:20:53.574 { 00:20:53.574 "name": null, 00:20:53.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:53.574 "is_configured": false, 00:20:53.574 "data_offset": 2048, 00:20:53.574 "data_size": 63488 00:20:53.574 }, 00:20:53.574 { 00:20:53.574 
"name": "BaseBdev2", 00:20:53.574 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:53.574 "is_configured": true, 00:20:53.574 "data_offset": 2048, 00:20:53.574 "data_size": 63488 00:20:53.574 } 00:20:53.574 ] 00:20:53.574 }' 00:20:53.574 10:46:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:53.836 10:46:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:53.836 10:46:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:53.836 10:46:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:53.836 10:46:20 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:54.095 [2024-07-24 10:46:20.589138] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:54.095 [2024-07-24 10:46:20.589534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:54.095 10:46:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:54.095 [2024-07-24 10:46:20.640386] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:20:54.095 [2024-07-24 10:46:20.643135] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:54.095 [2024-07-24 10:46:20.753161] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:54.095 [2024-07-24 10:46:20.754216] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:54.353 [2024-07-24 10:46:20.893161] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:54.612 [2024-07-24 10:46:21.234592] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:54.612 [2024-07-24 10:46:21.235632] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:54.870 [2024-07-24 10:46:21.447940] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:55.129 10:46:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.129 10:46:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.129 10:46:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:55.129 10:46:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:55.129 10:46:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.129 10:46:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.129 10:46:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.129 [2024-07-24 10:46:21.769264] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:55.129 [2024-07-24 10:46:21.770249] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:55.387 10:46:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.387 "name": "raid_bdev1", 00:20:55.387 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:55.387 "strip_size_kb": 0, 00:20:55.387 "state": "online", 00:20:55.387 "raid_level": "raid1", 00:20:55.387 "superblock": true, 00:20:55.387 "num_base_bdevs": 
2, 00:20:55.387 "num_base_bdevs_discovered": 2, 00:20:55.387 "num_base_bdevs_operational": 2, 00:20:55.388 "process": { 00:20:55.388 "type": "rebuild", 00:20:55.388 "target": "spare", 00:20:55.388 "progress": { 00:20:55.388 "blocks": 14336, 00:20:55.388 "percent": 22 00:20:55.388 } 00:20:55.388 }, 00:20:55.388 "base_bdevs_list": [ 00:20:55.388 { 00:20:55.388 "name": "spare", 00:20:55.388 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:20:55.388 "is_configured": true, 00:20:55.388 "data_offset": 2048, 00:20:55.388 "data_size": 63488 00:20:55.388 }, 00:20:55.388 { 00:20:55.388 "name": "BaseBdev2", 00:20:55.388 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:55.388 "is_configured": true, 00:20:55.388 "data_offset": 2048, 00:20:55.388 "data_size": 63488 00:20:55.388 } 00:20:55.388 ] 00:20:55.388 }' 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:55.388 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@657 -- # local timeout=461 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.388 10:46:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.646 10:46:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.646 "name": "raid_bdev1", 00:20:55.646 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:55.646 "strip_size_kb": 0, 00:20:55.646 "state": "online", 00:20:55.646 "raid_level": "raid1", 00:20:55.646 "superblock": true, 00:20:55.646 "num_base_bdevs": 2, 00:20:55.646 "num_base_bdevs_discovered": 2, 00:20:55.646 "num_base_bdevs_operational": 2, 00:20:55.646 "process": { 00:20:55.646 "type": "rebuild", 00:20:55.646 "target": "spare", 00:20:55.646 "progress": { 00:20:55.646 "blocks": 18432, 00:20:55.646 "percent": 29 00:20:55.646 } 00:20:55.646 }, 00:20:55.646 "base_bdevs_list": [ 00:20:55.646 { 00:20:55.646 "name": "spare", 00:20:55.646 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:20:55.646 "is_configured": true, 00:20:55.646 "data_offset": 2048, 00:20:55.646 "data_size": 63488 00:20:55.646 }, 00:20:55.646 { 00:20:55.646 "name": "BaseBdev2", 00:20:55.646 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:55.646 "is_configured": true, 00:20:55.646 "data_offset": 2048, 00:20:55.646 "data_size": 63488 00:20:55.646 } 00:20:55.646 ] 
00:20:55.646 }' 00:20:55.646 [2024-07-24 10:46:22.243218] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:55.646 10:46:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.647 [2024-07-24 10:46:22.244345] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:55.647 10:46:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.647 10:46:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.905 10:46:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.905 10:46:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:56.164 [2024-07-24 10:46:22.817880] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:56.164 [2024-07-24 10:46:22.818631] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:56.730 [2024-07-24 10:46:23.157582] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.730 10:46:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.730 [2024-07-24 10:46:23.384293] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:56.987 10:46:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.987 "name": "raid_bdev1", 00:20:56.987 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:56.987 "strip_size_kb": 0, 00:20:56.987 "state": "online", 00:20:56.987 "raid_level": "raid1", 00:20:56.987 "superblock": true, 00:20:56.987 "num_base_bdevs": 2, 00:20:56.987 "num_base_bdevs_discovered": 2, 00:20:56.987 "num_base_bdevs_operational": 2, 00:20:56.987 "process": { 00:20:56.987 "type": "rebuild", 00:20:56.987 "target": "spare", 00:20:56.987 "progress": { 00:20:56.987 "blocks": 36864, 00:20:56.987 "percent": 58 00:20:56.987 } 00:20:56.987 }, 00:20:56.987 "base_bdevs_list": [ 00:20:56.987 { 00:20:56.987 "name": "spare", 00:20:56.987 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:20:56.987 "is_configured": true, 00:20:56.987 "data_offset": 2048, 00:20:56.987 "data_size": 63488 00:20:56.987 }, 00:20:56.987 { 00:20:56.987 "name": "BaseBdev2", 00:20:56.987 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:56.987 "is_configured": true, 00:20:56.987 "data_offset": 2048, 00:20:56.987 "data_size": 63488 00:20:56.987 } 00:20:56.987 ] 00:20:56.987 }' 00:20:56.987 10:46:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.987 [2024-07-24 10:46:23.619804] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:56.987 10:46:23 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.987 10:46:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.244 10:46:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.244 10:46:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:57.501 [2024-07-24 10:46:24.091792] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:20:57.760 [2024-07-24 10:46:24.218095] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:58.326 "name": "raid_bdev1", 00:20:58.326 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:58.326 "strip_size_kb": 0, 00:20:58.326 "state": "online", 00:20:58.326 "raid_level": "raid1", 00:20:58.326 "superblock": true, 00:20:58.326 "num_base_bdevs": 2, 00:20:58.326 "num_base_bdevs_discovered": 2, 00:20:58.326 "num_base_bdevs_operational": 2, 00:20:58.326 "process": { 00:20:58.326 "type": "rebuild", 00:20:58.326 "target": "spare", 00:20:58.326 "progress": { 00:20:58.326 "blocks": 59392, 00:20:58.326 "percent": 93 00:20:58.326 } 00:20:58.326 }, 00:20:58.326 "base_bdevs_list": [ 00:20:58.326 { 00:20:58.326 "name": "spare", 00:20:58.326 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:20:58.326 "is_configured": true, 00:20:58.326 "data_offset": 2048, 00:20:58.326 "data_size": 63488 00:20:58.326 }, 00:20:58.326 { 00:20:58.326 "name": "BaseBdev2", 00:20:58.326 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:58.326 "is_configured": true, 00:20:58.326 "data_offset": 2048, 00:20:58.326 "data_size": 63488 00:20:58.326 } 00:20:58.326 ] 00:20:58.326 }' 00:20:58.326 10:46:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:58.650 10:46:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.650 10:46:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:58.650 10:46:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.650 10:46:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:58.650 [2024-07-24 10:46:25.121237] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:58.650 [2024-07-24 10:46:25.228921] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:58.650 [2024-07-24 10:46:25.231839] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.584 10:46:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:59.584 10:46:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.584 10:46:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.584 10:46:26 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:59.584 10:46:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:59.584 10:46:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.584 10:46:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.584 10:46:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.842 "name": "raid_bdev1", 00:20:59.842 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:20:59.842 "strip_size_kb": 0, 00:20:59.842 "state": "online", 00:20:59.842 "raid_level": "raid1", 00:20:59.842 "superblock": true, 00:20:59.842 "num_base_bdevs": 2, 00:20:59.842 "num_base_bdevs_discovered": 2, 00:20:59.842 "num_base_bdevs_operational": 2, 00:20:59.842 "base_bdevs_list": [ 00:20:59.842 { 00:20:59.842 "name": "spare", 00:20:59.842 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:20:59.842 "is_configured": true, 00:20:59.842 "data_offset": 2048, 00:20:59.842 "data_size": 63488 00:20:59.842 }, 00:20:59.842 { 00:20:59.842 "name": "BaseBdev2", 00:20:59.842 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:20:59.842 "is_configured": true, 00:20:59.842 "data_offset": 2048, 00:20:59.842 "data_size": 63488 00:20:59.842 } 00:20:59.842 ] 00:20:59.842 }' 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@660 -- # break 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.842 10:46:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.100 10:46:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:00.100 "name": "raid_bdev1", 00:21:00.100 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:21:00.100 "strip_size_kb": 0, 00:21:00.100 "state": "online", 00:21:00.100 "raid_level": "raid1", 00:21:00.100 "superblock": true, 00:21:00.100 "num_base_bdevs": 2, 00:21:00.100 "num_base_bdevs_discovered": 2, 00:21:00.100 "num_base_bdevs_operational": 2, 00:21:00.100 "base_bdevs_list": [ 00:21:00.100 { 00:21:00.100 "name": "spare", 00:21:00.100 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:21:00.100 "is_configured": true, 00:21:00.100 "data_offset": 2048, 00:21:00.100 "data_size": 63488 00:21:00.100 }, 00:21:00.100 { 00:21:00.100 "name": "BaseBdev2", 00:21:00.100 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:21:00.100 "is_configured": true, 00:21:00.100 "data_offset": 2048, 00:21:00.100 "data_size": 63488 00:21:00.100 } 00:21:00.100 ] 00:21:00.100 }' 00:21:00.100 10:46:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:00.100 10:46:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:00.100 10:46:26 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.358 10:46:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.615 10:46:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:00.615 "name": "raid_bdev1", 00:21:00.615 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:21:00.615 "strip_size_kb": 0, 00:21:00.615 "state": "online", 00:21:00.615 "raid_level": "raid1", 00:21:00.615 "superblock": true, 00:21:00.615 "num_base_bdevs": 2, 00:21:00.615 "num_base_bdevs_discovered": 2, 00:21:00.615 "num_base_bdevs_operational": 2, 00:21:00.615 "base_bdevs_list": [ 00:21:00.615 { 00:21:00.615 "name": "spare", 00:21:00.615 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:21:00.615 "is_configured": true, 00:21:00.615 "data_offset": 2048, 00:21:00.615 "data_size": 63488 00:21:00.615 }, 00:21:00.615 { 00:21:00.615 "name": "BaseBdev2", 00:21:00.615 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:21:00.615 "is_configured": true, 00:21:00.615 "data_offset": 2048, 00:21:00.615 "data_size": 63488 00:21:00.615 } 00:21:00.615 ] 00:21:00.615 }' 00:21:00.615 10:46:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:00.615 10:46:27 -- common/autotest_common.sh@10 -- # set +x 00:21:01.180 10:46:27 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:01.437 [2024-07-24 10:46:27.979543] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.437 [2024-07-24 10:46:27.979893] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:01.437 00:21:01.437 Latency(us) 00:21:01.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.437 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:01.437 raid_bdev1 : 12.26 102.25 306.75 0.00 0.00 13074.51 310.92 119156.36 00:21:01.437 =================================================================================================================== 00:21:01.437 Total : 102.25 306.75 0.00 0.00 13074.51 310.92 119156.36 00:21:01.437 [2024-07-24 10:46:28.085570] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.437 [2024-07-24 10:46:28.085815] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.437 [2024-07-24 10:46:28.085979] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.437 0 
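Past the latency summary above, the log moves into the data-verification pass: with raid_bdev1 deleted, the rebuilt spare member and the untouched BaseBdev2 are exported over NBD and compared byte-for-byte beyond the first 1048576 bytes, which matches the data_offset of 2048 blocks x 512 bytes reported earlier for this superblock-enabled array (the region in front of it holds the raid superblock). A minimal sketch of that check, using only the RPCs that appear in this log:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Export the rebuilt member and the surviving member as NBD block devices.
$rpc nbd_start_disk spare /dev/nbd0
$rpc nbd_start_disk BaseBdev2 /dev/nbd1

# Skip the 1 MiB data offset on both devices and require the mirrored data
# regions to be byte-identical; cmp exits non-zero on the first difference.
cmp -i 1048576 /dev/nbd0 /dev/nbd1

# Tear the NBD exports back down.
$rpc nbd_stop_disk /dev/nbd1
$rpc nbd_stop_disk /dev/nbd0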
00:21:01.437 [2024-07-24 10:46:28.086236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:21:01.437 10:46:28 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:01.437 10:46:28 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.005 10:46:28 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:02.005 10:46:28 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:02.005 10:46:28 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@12 -- # local i 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:02.005 /dev/nbd0 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:02.005 10:46:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:02.005 10:46:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:02.005 10:46:28 -- common/autotest_common.sh@857 -- # local i 00:21:02.005 10:46:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:02.005 10:46:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:02.005 10:46:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:02.005 10:46:28 -- common/autotest_common.sh@861 -- # break 00:21:02.005 10:46:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:02.005 10:46:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:02.005 10:46:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.005 1+0 records in 00:21:02.005 1+0 records out 00:21:02.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005261 s, 7.8 MB/s 00:21:02.005 10:46:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.005 10:46:28 -- common/autotest_common.sh@874 -- # size=4096 00:21:02.005 10:46:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.005 10:46:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:02.005 10:46:28 -- common/autotest_common.sh@877 -- # return 0 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.006 10:46:28 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:02.006 10:46:28 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:02.006 10:46:28 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:02.006 10:46:28 -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@12 -- # local i 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.006 10:46:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:02.275 /dev/nbd1 00:21:02.275 10:46:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:02.275 10:46:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:02.275 10:46:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:02.275 10:46:28 -- common/autotest_common.sh@857 -- # local i 00:21:02.275 10:46:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:02.275 10:46:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:02.275 10:46:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:02.275 10:46:28 -- common/autotest_common.sh@861 -- # break 00:21:02.275 10:46:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:02.275 10:46:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:02.275 10:46:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:02.275 1+0 records in 00:21:02.275 1+0 records out 00:21:02.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611336 s, 6.7 MB/s 00:21:02.275 10:46:28 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.275 10:46:28 -- common/autotest_common.sh@874 -- # size=4096 00:21:02.275 10:46:28 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:02.532 10:46:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:02.532 10:46:28 -- common/autotest_common.sh@877 -- # return 0 00:21:02.532 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:02.532 10:46:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:02.532 10:46:28 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:02.532 10:46:29 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:02.532 10:46:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:02.532 10:46:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:02.532 10:46:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:02.532 10:46:29 -- bdev/nbd_common.sh@51 -- # local i 00:21:02.532 10:46:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.532 10:46:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@41 -- # break 00:21:02.789 10:46:29 -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.789 10:46:29 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:02.790 10:46:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:02.790 10:46:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:02.790 
10:46:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:02.790 10:46:29 -- bdev/nbd_common.sh@51 -- # local i 00:21:02.790 10:46:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.790 10:46:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@41 -- # break 00:21:03.046 10:46:29 -- bdev/nbd_common.sh@45 -- # return 0 00:21:03.046 10:46:29 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:03.046 10:46:29 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:03.046 10:46:29 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:03.046 10:46:29 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:03.303 10:46:29 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:03.561 [2024-07-24 10:46:30.110314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:03.561 [2024-07-24 10:46:30.110837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.561 [2024-07-24 10:46:30.111009] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:03.561 [2024-07-24 10:46:30.111209] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.561 [2024-07-24 10:46:30.114099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.561 [2024-07-24 10:46:30.114321] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:03.561 [2024-07-24 10:46:30.114542] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:03.561 [2024-07-24 10:46:30.114721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.561 BaseBdev1 00:21:03.561 10:46:30 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:03.561 10:46:30 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:03.561 10:46:30 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:03.819 10:46:30 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:04.078 [2024-07-24 10:46:30.578879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:04.078 [2024-07-24 10:46:30.579351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.078 [2024-07-24 10:46:30.579542] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:04.078 [2024-07-24 10:46:30.579681] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.078 [2024-07-24 10:46:30.580363] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:21:04.078 [2024-07-24 10:46:30.580557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:04.078 [2024-07-24 10:46:30.580782] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:04.078 [2024-07-24 10:46:30.580908] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:04.078 [2024-07-24 10:46:30.581017] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.078 [2024-07-24 10:46:30.581087] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state configuring 00:21:04.078 [2024-07-24 10:46:30.581379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.078 BaseBdev2 00:21:04.078 10:46:30 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:04.336 10:46:30 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:04.595 [2024-07-24 10:46:31.103083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:04.595 [2024-07-24 10:46:31.103593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.595 [2024-07-24 10:46:31.103825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:04.595 [2024-07-24 10:46:31.103971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.595 [2024-07-24 10:46:31.104701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.595 [2024-07-24 10:46:31.104902] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:04.595 [2024-07-24 10:46:31.105166] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:04.595 [2024-07-24 10:46:31.105341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.595 spare 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.595 10:46:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.595 [2024-07-24 10:46:31.205628] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:21:04.595 [2024-07-24 10:46:31.205951] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:04.595 [2024-07-24 10:46:31.206281] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:21:04.595 [2024-07-24 10:46:31.207019] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:21:04.595 [2024-07-24 10:46:31.207189] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:21:04.595 [2024-07-24 10:46:31.207558] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.854 10:46:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:04.854 "name": "raid_bdev1", 00:21:04.854 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:21:04.854 "strip_size_kb": 0, 00:21:04.854 "state": "online", 00:21:04.854 "raid_level": "raid1", 00:21:04.854 "superblock": true, 00:21:04.854 "num_base_bdevs": 2, 00:21:04.854 "num_base_bdevs_discovered": 2, 00:21:04.854 "num_base_bdevs_operational": 2, 00:21:04.854 "base_bdevs_list": [ 00:21:04.854 { 00:21:04.854 "name": "spare", 00:21:04.854 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:21:04.854 "is_configured": true, 00:21:04.854 "data_offset": 2048, 00:21:04.854 "data_size": 63488 00:21:04.854 }, 00:21:04.854 { 00:21:04.854 "name": "BaseBdev2", 00:21:04.854 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:21:04.854 "is_configured": true, 00:21:04.854 "data_offset": 2048, 00:21:04.854 "data_size": 63488 00:21:04.854 } 00:21:04.854 ] 00:21:04.854 }' 00:21:04.854 10:46:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:04.854 10:46:31 -- common/autotest_common.sh@10 -- # set +x 00:21:05.420 10:46:32 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:05.420 10:46:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:05.420 10:46:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:05.420 10:46:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:05.420 10:46:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:05.420 10:46:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.420 10:46:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.679 10:46:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:05.679 "name": "raid_bdev1", 00:21:05.679 "uuid": "f33d3ad3-954b-4ad5-95dd-74b6a2ceaa56", 00:21:05.679 "strip_size_kb": 0, 00:21:05.679 "state": "online", 00:21:05.679 "raid_level": "raid1", 00:21:05.679 "superblock": true, 00:21:05.679 "num_base_bdevs": 2, 00:21:05.679 "num_base_bdevs_discovered": 2, 00:21:05.679 "num_base_bdevs_operational": 2, 00:21:05.679 "base_bdevs_list": [ 00:21:05.679 { 00:21:05.679 "name": "spare", 00:21:05.679 "uuid": "685925dd-e0fe-5eb5-b185-2e95b44bb15e", 00:21:05.679 "is_configured": true, 00:21:05.679 "data_offset": 2048, 00:21:05.679 "data_size": 63488 00:21:05.679 }, 00:21:05.679 { 00:21:05.679 "name": "BaseBdev2", 00:21:05.679 "uuid": "4602fa7b-7fef-5a1f-868a-24db8b5d75aa", 00:21:05.679 "is_configured": true, 00:21:05.679 "data_offset": 2048, 00:21:05.679 "data_size": 63488 00:21:05.679 } 00:21:05.679 ] 00:21:05.679 }' 00:21:05.679 10:46:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:05.937 10:46:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:05.937 10:46:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:05.937 10:46:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:05.937 10:46:32 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.937 10:46:32 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:06.195 10:46:32 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.195 10:46:32 -- bdev/bdev_raid.sh@709 -- # killprocess 134989 00:21:06.195 10:46:32 -- common/autotest_common.sh@926 -- # '[' -z 134989 ']' 00:21:06.195 10:46:32 -- common/autotest_common.sh@930 -- # kill -0 134989 00:21:06.195 10:46:32 -- common/autotest_common.sh@931 -- # uname 00:21:06.195 10:46:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:06.195 10:46:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134989 00:21:06.195 10:46:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:06.195 10:46:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:06.195 10:46:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134989' 00:21:06.195 killing process with pid 134989 00:21:06.195 10:46:32 -- common/autotest_common.sh@945 -- # kill 134989 00:21:06.195 Received shutdown signal, test time was about 16.935062 seconds 00:21:06.195 00:21:06.195 Latency(us) 00:21:06.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:06.195 =================================================================================================================== 00:21:06.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:06.195 10:46:32 -- common/autotest_common.sh@950 -- # wait 134989 00:21:06.195 [2024-07-24 10:46:32.750905] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:06.195 [2024-07-24 10:46:32.751183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.195 [2024-07-24 10:46:32.751458] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.195 [2024-07-24 10:46:32.751612] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:21:06.195 [2024-07-24 10:46:32.792809] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:06.762 ************************************ 00:21:06.762 END TEST raid_rebuild_test_sb_io 00:21:06.762 ************************************ 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:06.762 00:21:06.762 real 0m21.843s 00:21:06.762 user 0m35.508s 00:21:06.762 sys 0m2.353s 00:21:06.762 10:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.762 10:46:33 -- common/autotest_common.sh@10 -- # set +x 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:06.762 10:46:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:06.762 10:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:06.762 10:46:33 -- common/autotest_common.sh@10 -- # set +x 00:21:06.762 ************************************ 00:21:06.762 START TEST raid_rebuild_test 00:21:06.762 ************************************ 00:21:06.762 10:46:33 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:06.762 
10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@544 -- # raid_pid=135569 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135569 /var/tmp/spdk-raid.sock 00:21:06.762 10:46:33 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:06.762 10:46:33 -- common/autotest_common.sh@819 -- # '[' -z 135569 ']' 00:21:06.762 10:46:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:06.762 10:46:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:06.762 10:46:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:06.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:06.763 10:46:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:06.763 10:46:33 -- common/autotest_common.sh@10 -- # set +x 00:21:06.763 [2024-07-24 10:46:33.264656] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:21:06.763 [2024-07-24 10:46:33.265138] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135569 ] 00:21:06.763 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:06.763 Zero copy mechanism will not be used. 
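Note: the trace above launches bdevperf with -z against the /var/tmp/spdk-raid.sock RPC socket and then blocks in waitforlisten before issuing any rpc.py calls. The following is a minimal sketch of that launch-and-wait pattern, not the upstream helper itself: the polling loop is an illustrative stand-in for waitforlisten, and the retry count and sleep interval are assumptions rather than values taken from the harness.

    #!/usr/bin/env bash
    # Sketch: start bdevperf in RPC-wait mode and poll for its UNIX-domain RPC socket,
    # mirroring the "waitforlisten <pid> /var/tmp/spdk-raid.sock" step in the trace above.
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    bdevperf_pid=$!

    # Assumed polling loop (the real waitforlisten implementation may differ):
    for _ in $(seq 1 100); do
        if [ -S "$rpc_sock" ] && \
           /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

    # From here the test drives the target entirely over rpc.py -s "$rpc_sock" ...

Once the socket answers, the rest of the test is pure RPC scripting against that socket, which is why every subsequent trace line repeats the same rpc.py -s /var/tmp/spdk-raid.sock prefix.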
00:21:06.763 [2024-07-24 10:46:33.405604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.020 [2024-07-24 10:46:33.531394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.020 [2024-07-24 10:46:33.608451] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:07.956 10:46:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:07.956 10:46:34 -- common/autotest_common.sh@852 -- # return 0 00:21:07.956 10:46:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:07.956 10:46:34 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:07.956 10:46:34 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:07.956 BaseBdev1 00:21:07.956 10:46:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:07.956 10:46:34 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:07.956 10:46:34 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:08.214 BaseBdev2 00:21:08.214 10:46:34 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:08.214 10:46:34 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:08.214 10:46:34 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:08.472 BaseBdev3 00:21:08.472 10:46:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:08.472 10:46:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:08.472 10:46:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:08.730 BaseBdev4 00:21:08.730 10:46:35 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:08.988 spare_malloc 00:21:08.988 10:46:35 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:09.246 spare_delay 00:21:09.246 10:46:35 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:09.505 [2024-07-24 10:46:36.011638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:09.505 [2024-07-24 10:46:36.012200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.505 [2024-07-24 10:46:36.012387] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:09.505 [2024-07-24 10:46:36.012563] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.505 [2024-07-24 10:46:36.015759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.505 [2024-07-24 10:46:36.016002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:09.505 spare 00:21:09.505 10:46:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:09.764 [2024-07-24 10:46:36.244637] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:09.764 [2024-07-24 10:46:36.247378] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:09.764 [2024-07-24 10:46:36.247634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.764 [2024-07-24 10:46:36.247833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:09.764 [2024-07-24 10:46:36.248053] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:21:09.764 [2024-07-24 10:46:36.248184] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:09.764 [2024-07-24 10:46:36.248448] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:21:09.764 [2024-07-24 10:46:36.249112] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:21:09.764 [2024-07-24 10:46:36.249269] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:21:09.764 [2024-07-24 10:46:36.249669] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.764 10:46:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.023 10:46:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:10.023 "name": "raid_bdev1", 00:21:10.023 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:10.023 "strip_size_kb": 0, 00:21:10.023 "state": "online", 00:21:10.023 "raid_level": "raid1", 00:21:10.023 "superblock": false, 00:21:10.023 "num_base_bdevs": 4, 00:21:10.023 "num_base_bdevs_discovered": 4, 00:21:10.023 "num_base_bdevs_operational": 4, 00:21:10.023 "base_bdevs_list": [ 00:21:10.023 { 00:21:10.023 "name": "BaseBdev1", 00:21:10.023 "uuid": "c738c9eb-ef33-41dd-9045-0f0a911df18a", 00:21:10.023 "is_configured": true, 00:21:10.023 "data_offset": 0, 00:21:10.023 "data_size": 65536 00:21:10.023 }, 00:21:10.023 { 00:21:10.023 "name": "BaseBdev2", 00:21:10.023 "uuid": "ce36187a-03f3-48bc-80cc-fcbdb54390f8", 00:21:10.023 "is_configured": true, 00:21:10.023 "data_offset": 0, 00:21:10.023 "data_size": 65536 00:21:10.023 }, 00:21:10.023 { 00:21:10.024 "name": "BaseBdev3", 00:21:10.024 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:10.024 "is_configured": true, 00:21:10.024 "data_offset": 0, 00:21:10.024 "data_size": 65536 00:21:10.024 }, 00:21:10.024 { 00:21:10.024 "name": "BaseBdev4", 00:21:10.024 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:10.024 "is_configured": true, 00:21:10.024 "data_offset": 0, 00:21:10.024 "data_size": 65536 00:21:10.024 } 00:21:10.024 ] 00:21:10.024 }' 00:21:10.024 
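Note: the raid_bdev_info blob captured just above is how verify_raid_bdev_state works in this trace: it pulls every raid bdev via rpc.py bdev_raid_get_bdevs all, selects the entry named raid_bdev1 with jq, and then compares fields against the expected values passed in by the test. Below is a compact sketch of that select-and-compare flow under those assumptions; the exact assertions are inferred from the expected_state, raid_level and num_base_bdevs_operational locals visible in the trace and are not a copy of the upstream function.

    # Sketch: fetch raid bdev info over RPC and assert on the fields shown in the capture above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # Assumed checks mirroring the values in the JSON dump above (state/level/discovered count):
    [ "$(jq -r '.state'                     <<<"$info")" = online ] || exit 1
    [ "$(jq -r '.raid_level'                <<<"$info")" = raid1  ] || exit 1
    [ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 4    ] || exit 1

The same pattern repeats later in the log with jq filters such as '.process.type // "none"' to confirm whether a rebuild process is running and which bdev it targets.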
10:46:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:10.024 10:46:36 -- common/autotest_common.sh@10 -- # set +x 00:21:10.591 10:46:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:10.591 10:46:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:10.850 [2024-07-24 10:46:37.362198] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:10.850 10:46:37 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:10.850 10:46:37 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.850 10:46:37 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:11.108 10:46:37 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:11.108 10:46:37 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:11.108 10:46:37 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:11.108 10:46:37 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:11.108 10:46:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:11.108 10:46:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:11.109 10:46:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:11.109 10:46:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:11.109 10:46:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:11.109 10:46:37 -- bdev/nbd_common.sh@12 -- # local i 00:21:11.109 10:46:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:11.109 10:46:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.109 10:46:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:11.367 [2024-07-24 10:46:37.898190] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:21:11.367 /dev/nbd0 00:21:11.367 10:46:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:11.367 10:46:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:11.367 10:46:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:11.367 10:46:37 -- common/autotest_common.sh@857 -- # local i 00:21:11.367 10:46:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:11.367 10:46:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:11.367 10:46:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:11.367 10:46:37 -- common/autotest_common.sh@861 -- # break 00:21:11.367 10:46:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:11.367 10:46:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:11.367 10:46:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:11.367 1+0 records in 00:21:11.367 1+0 records out 00:21:11.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098307 s, 4.2 MB/s 00:21:11.367 10:46:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.367 10:46:37 -- common/autotest_common.sh@874 -- # size=4096 00:21:11.367 10:46:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:11.367 10:46:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:11.367 10:46:37 -- common/autotest_common.sh@877 -- # return 0 00:21:11.367 10:46:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:11.367 10:46:37 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:11.367 10:46:37 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:11.367 10:46:37 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:11.367 10:46:37 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:17.927 65536+0 records in 00:21:17.927 65536+0 records out 00:21:17.927 33554432 bytes (34 MB, 32 MiB) copied, 6.16371 s, 5.4 MB/s 00:21:17.927 10:46:44 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@51 -- # local i 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:17.927 [2024-07-24 10:46:44.451625] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@41 -- # break 00:21:17.927 10:46:44 -- bdev/nbd_common.sh@45 -- # return 0 00:21:17.927 10:46:44 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:18.186 [2024-07-24 10:46:44.711298] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:18.186 10:46:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.186 10:46:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.187 10:46:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.445 10:46:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.445 "name": "raid_bdev1", 00:21:18.445 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:18.445 "strip_size_kb": 0, 00:21:18.445 "state": "online", 00:21:18.445 "raid_level": "raid1", 00:21:18.445 "superblock": false, 00:21:18.445 "num_base_bdevs": 4, 00:21:18.445 "num_base_bdevs_discovered": 3, 00:21:18.445 "num_base_bdevs_operational": 3, 00:21:18.445 "base_bdevs_list": [ 00:21:18.445 { 00:21:18.445 "name": null, 00:21:18.445 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:18.445 "is_configured": false, 00:21:18.445 "data_offset": 0, 00:21:18.445 "data_size": 65536 00:21:18.445 }, 00:21:18.445 { 00:21:18.445 "name": "BaseBdev2", 00:21:18.445 "uuid": "ce36187a-03f3-48bc-80cc-fcbdb54390f8", 00:21:18.445 "is_configured": true, 00:21:18.445 "data_offset": 0, 00:21:18.445 "data_size": 65536 00:21:18.445 }, 00:21:18.445 { 00:21:18.445 "name": "BaseBdev3", 00:21:18.445 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:18.445 "is_configured": true, 00:21:18.445 "data_offset": 0, 00:21:18.445 "data_size": 65536 00:21:18.445 }, 00:21:18.445 { 00:21:18.445 "name": "BaseBdev4", 00:21:18.445 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:18.445 "is_configured": true, 00:21:18.445 "data_offset": 0, 00:21:18.445 "data_size": 65536 00:21:18.445 } 00:21:18.445 ] 00:21:18.445 }' 00:21:18.445 10:46:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.445 10:46:44 -- common/autotest_common.sh@10 -- # set +x 00:21:19.011 10:46:45 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:19.270 [2024-07-24 10:46:45.863633] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:19.270 [2024-07-24 10:46:45.863996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:19.270 [2024-07-24 10:46:45.870080] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06080 00:21:19.270 [2024-07-24 10:46:45.872874] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.270 10:46:45 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:20.645 10:46:46 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.645 10:46:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.645 10:46:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:20.645 10:46:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:20.645 10:46:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.645 10:46:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.645 10:46:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.645 10:46:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:20.645 "name": "raid_bdev1", 00:21:20.645 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:20.645 "strip_size_kb": 0, 00:21:20.645 "state": "online", 00:21:20.645 "raid_level": "raid1", 00:21:20.645 "superblock": false, 00:21:20.645 "num_base_bdevs": 4, 00:21:20.645 "num_base_bdevs_discovered": 4, 00:21:20.645 "num_base_bdevs_operational": 4, 00:21:20.645 "process": { 00:21:20.645 "type": "rebuild", 00:21:20.645 "target": "spare", 00:21:20.645 "progress": { 00:21:20.645 "blocks": 24576, 00:21:20.645 "percent": 37 00:21:20.645 } 00:21:20.645 }, 00:21:20.645 "base_bdevs_list": [ 00:21:20.645 { 00:21:20.645 "name": "spare", 00:21:20.645 "uuid": "adc0262b-dfd2-5a0c-a9bf-94316fcb3ff9", 00:21:20.645 "is_configured": true, 00:21:20.645 "data_offset": 0, 00:21:20.645 "data_size": 65536 00:21:20.645 }, 00:21:20.645 { 00:21:20.645 "name": "BaseBdev2", 00:21:20.645 "uuid": "ce36187a-03f3-48bc-80cc-fcbdb54390f8", 00:21:20.645 "is_configured": true, 00:21:20.645 "data_offset": 0, 00:21:20.645 "data_size": 65536 00:21:20.645 }, 00:21:20.645 { 00:21:20.645 "name": "BaseBdev3", 00:21:20.645 
"uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:20.645 "is_configured": true, 00:21:20.645 "data_offset": 0, 00:21:20.645 "data_size": 65536 00:21:20.645 }, 00:21:20.645 { 00:21:20.645 "name": "BaseBdev4", 00:21:20.645 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:20.645 "is_configured": true, 00:21:20.645 "data_offset": 0, 00:21:20.645 "data_size": 65536 00:21:20.645 } 00:21:20.645 ] 00:21:20.645 }' 00:21:20.645 10:46:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:20.645 10:46:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:20.645 10:46:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:20.645 10:46:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:20.645 10:46:47 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:20.903 [2024-07-24 10:46:47.451307] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:20.903 [2024-07-24 10:46:47.486293] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:20.903 [2024-07-24 10:46:47.486737] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.903 10:46:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.161 10:46:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.161 "name": "raid_bdev1", 00:21:21.161 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:21.161 "strip_size_kb": 0, 00:21:21.161 "state": "online", 00:21:21.161 "raid_level": "raid1", 00:21:21.161 "superblock": false, 00:21:21.161 "num_base_bdevs": 4, 00:21:21.161 "num_base_bdevs_discovered": 3, 00:21:21.161 "num_base_bdevs_operational": 3, 00:21:21.161 "base_bdevs_list": [ 00:21:21.161 { 00:21:21.161 "name": null, 00:21:21.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.161 "is_configured": false, 00:21:21.161 "data_offset": 0, 00:21:21.161 "data_size": 65536 00:21:21.161 }, 00:21:21.161 { 00:21:21.161 "name": "BaseBdev2", 00:21:21.161 "uuid": "ce36187a-03f3-48bc-80cc-fcbdb54390f8", 00:21:21.161 "is_configured": true, 00:21:21.161 "data_offset": 0, 00:21:21.161 "data_size": 65536 00:21:21.161 }, 00:21:21.161 { 00:21:21.161 "name": "BaseBdev3", 00:21:21.161 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:21.161 "is_configured": true, 00:21:21.161 "data_offset": 0, 00:21:21.161 "data_size": 65536 00:21:21.161 }, 00:21:21.161 { 00:21:21.161 "name": "BaseBdev4", 00:21:21.161 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 
00:21:21.161 "is_configured": true, 00:21:21.161 "data_offset": 0, 00:21:21.161 "data_size": 65536 00:21:21.161 } 00:21:21.161 ] 00:21:21.161 }' 00:21:21.161 10:46:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.161 10:46:47 -- common/autotest_common.sh@10 -- # set +x 00:21:21.726 10:46:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.726 10:46:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:21.726 10:46:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:21.726 10:46:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:21.726 10:46:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:21.726 10:46:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.726 10:46:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.292 10:46:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.292 "name": "raid_bdev1", 00:21:22.292 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:22.292 "strip_size_kb": 0, 00:21:22.292 "state": "online", 00:21:22.292 "raid_level": "raid1", 00:21:22.292 "superblock": false, 00:21:22.292 "num_base_bdevs": 4, 00:21:22.292 "num_base_bdevs_discovered": 3, 00:21:22.292 "num_base_bdevs_operational": 3, 00:21:22.292 "base_bdevs_list": [ 00:21:22.292 { 00:21:22.293 "name": null, 00:21:22.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.293 "is_configured": false, 00:21:22.293 "data_offset": 0, 00:21:22.293 "data_size": 65536 00:21:22.293 }, 00:21:22.293 { 00:21:22.293 "name": "BaseBdev2", 00:21:22.293 "uuid": "ce36187a-03f3-48bc-80cc-fcbdb54390f8", 00:21:22.293 "is_configured": true, 00:21:22.293 "data_offset": 0, 00:21:22.293 "data_size": 65536 00:21:22.293 }, 00:21:22.293 { 00:21:22.293 "name": "BaseBdev3", 00:21:22.293 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:22.293 "is_configured": true, 00:21:22.293 "data_offset": 0, 00:21:22.293 "data_size": 65536 00:21:22.293 }, 00:21:22.293 { 00:21:22.293 "name": "BaseBdev4", 00:21:22.293 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:22.293 "is_configured": true, 00:21:22.293 "data_offset": 0, 00:21:22.293 "data_size": 65536 00:21:22.293 } 00:21:22.293 ] 00:21:22.293 }' 00:21:22.293 10:46:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.293 10:46:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:22.293 10:46:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.293 10:46:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:22.293 10:46:48 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.551 [2024-07-24 10:46:48.986235] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:22.551 [2024-07-24 10:46:48.986663] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.551 [2024-07-24 10:46:48.992463] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:21:22.551 [2024-07-24 10:46:48.995027] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.551 10:46:49 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:23.544 10:46:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.544 10:46:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:21:23.544 10:46:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:23.544 10:46:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:23.544 10:46:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:23.544 10:46:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.544 10:46:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:23.802 "name": "raid_bdev1", 00:21:23.802 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:23.802 "strip_size_kb": 0, 00:21:23.802 "state": "online", 00:21:23.802 "raid_level": "raid1", 00:21:23.802 "superblock": false, 00:21:23.802 "num_base_bdevs": 4, 00:21:23.802 "num_base_bdevs_discovered": 4, 00:21:23.802 "num_base_bdevs_operational": 4, 00:21:23.802 "process": { 00:21:23.802 "type": "rebuild", 00:21:23.802 "target": "spare", 00:21:23.802 "progress": { 00:21:23.802 "blocks": 24576, 00:21:23.802 "percent": 37 00:21:23.802 } 00:21:23.802 }, 00:21:23.802 "base_bdevs_list": [ 00:21:23.802 { 00:21:23.802 "name": "spare", 00:21:23.802 "uuid": "adc0262b-dfd2-5a0c-a9bf-94316fcb3ff9", 00:21:23.802 "is_configured": true, 00:21:23.802 "data_offset": 0, 00:21:23.802 "data_size": 65536 00:21:23.802 }, 00:21:23.802 { 00:21:23.802 "name": "BaseBdev2", 00:21:23.802 "uuid": "ce36187a-03f3-48bc-80cc-fcbdb54390f8", 00:21:23.802 "is_configured": true, 00:21:23.802 "data_offset": 0, 00:21:23.802 "data_size": 65536 00:21:23.802 }, 00:21:23.802 { 00:21:23.802 "name": "BaseBdev3", 00:21:23.802 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:23.802 "is_configured": true, 00:21:23.802 "data_offset": 0, 00:21:23.802 "data_size": 65536 00:21:23.802 }, 00:21:23.802 { 00:21:23.802 "name": "BaseBdev4", 00:21:23.802 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:23.802 "is_configured": true, 00:21:23.802 "data_offset": 0, 00:21:23.802 "data_size": 65536 00:21:23.802 } 00:21:23.802 ] 00:21:23.802 }' 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:23.802 10:46:50 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:24.060 [2024-07-24 10:46:50.584756] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:24.060 [2024-07-24 10:46:50.607271] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06220 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.060 10:46:50 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.060 10:46:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.317 10:46:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.317 "name": "raid_bdev1", 00:21:24.317 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:24.317 "strip_size_kb": 0, 00:21:24.317 "state": "online", 00:21:24.317 "raid_level": "raid1", 00:21:24.317 "superblock": false, 00:21:24.317 "num_base_bdevs": 4, 00:21:24.317 "num_base_bdevs_discovered": 3, 00:21:24.317 "num_base_bdevs_operational": 3, 00:21:24.317 "process": { 00:21:24.317 "type": "rebuild", 00:21:24.317 "target": "spare", 00:21:24.317 "progress": { 00:21:24.317 "blocks": 36864, 00:21:24.317 "percent": 56 00:21:24.317 } 00:21:24.317 }, 00:21:24.317 "base_bdevs_list": [ 00:21:24.317 { 00:21:24.317 "name": "spare", 00:21:24.317 "uuid": "adc0262b-dfd2-5a0c-a9bf-94316fcb3ff9", 00:21:24.317 "is_configured": true, 00:21:24.317 "data_offset": 0, 00:21:24.317 "data_size": 65536 00:21:24.317 }, 00:21:24.317 { 00:21:24.317 "name": null, 00:21:24.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.317 "is_configured": false, 00:21:24.317 "data_offset": 0, 00:21:24.317 "data_size": 65536 00:21:24.317 }, 00:21:24.317 { 00:21:24.317 "name": "BaseBdev3", 00:21:24.317 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:24.317 "is_configured": true, 00:21:24.317 "data_offset": 0, 00:21:24.317 "data_size": 65536 00:21:24.317 }, 00:21:24.317 { 00:21:24.317 "name": "BaseBdev4", 00:21:24.318 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:24.318 "is_configured": true, 00:21:24.318 "data_offset": 0, 00:21:24.318 "data_size": 65536 00:21:24.318 } 00:21:24.318 ] 00:21:24.318 }' 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@657 -- # local timeout=490 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.318 10:46:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.575 10:46:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.575 "name": "raid_bdev1", 00:21:24.575 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:24.575 "strip_size_kb": 0, 00:21:24.575 "state": "online", 00:21:24.575 "raid_level": "raid1", 00:21:24.575 "superblock": false, 00:21:24.575 "num_base_bdevs": 4, 00:21:24.575 "num_base_bdevs_discovered": 3, 00:21:24.575 "num_base_bdevs_operational": 3, 00:21:24.575 "process": { 00:21:24.575 "type": 
"rebuild", 00:21:24.575 "target": "spare", 00:21:24.575 "progress": { 00:21:24.575 "blocks": 45056, 00:21:24.575 "percent": 68 00:21:24.575 } 00:21:24.575 }, 00:21:24.575 "base_bdevs_list": [ 00:21:24.575 { 00:21:24.575 "name": "spare", 00:21:24.575 "uuid": "adc0262b-dfd2-5a0c-a9bf-94316fcb3ff9", 00:21:24.575 "is_configured": true, 00:21:24.575 "data_offset": 0, 00:21:24.575 "data_size": 65536 00:21:24.575 }, 00:21:24.575 { 00:21:24.575 "name": null, 00:21:24.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.575 "is_configured": false, 00:21:24.575 "data_offset": 0, 00:21:24.575 "data_size": 65536 00:21:24.575 }, 00:21:24.575 { 00:21:24.575 "name": "BaseBdev3", 00:21:24.575 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:24.575 "is_configured": true, 00:21:24.575 "data_offset": 0, 00:21:24.575 "data_size": 65536 00:21:24.575 }, 00:21:24.575 { 00:21:24.576 "name": "BaseBdev4", 00:21:24.576 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:24.576 "is_configured": true, 00:21:24.576 "data_offset": 0, 00:21:24.576 "data_size": 65536 00:21:24.576 } 00:21:24.576 ] 00:21:24.576 }' 00:21:24.576 10:46:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.833 10:46:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.833 10:46:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.833 10:46:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.833 10:46:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:25.765 [2024-07-24 10:46:52.219654] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:25.765 [2024-07-24 10:46:52.220063] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:25.765 [2024-07-24 10:46:52.220289] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.765 10:46:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.022 "name": "raid_bdev1", 00:21:26.022 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:26.022 "strip_size_kb": 0, 00:21:26.022 "state": "online", 00:21:26.022 "raid_level": "raid1", 00:21:26.022 "superblock": false, 00:21:26.022 "num_base_bdevs": 4, 00:21:26.022 "num_base_bdevs_discovered": 3, 00:21:26.022 "num_base_bdevs_operational": 3, 00:21:26.022 "base_bdevs_list": [ 00:21:26.022 { 00:21:26.022 "name": "spare", 00:21:26.022 "uuid": "adc0262b-dfd2-5a0c-a9bf-94316fcb3ff9", 00:21:26.022 "is_configured": true, 00:21:26.022 "data_offset": 0, 00:21:26.022 "data_size": 65536 00:21:26.022 }, 00:21:26.022 { 00:21:26.022 "name": null, 00:21:26.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.022 "is_configured": false, 00:21:26.022 "data_offset": 0, 00:21:26.022 "data_size": 65536 00:21:26.022 }, 00:21:26.022 { 00:21:26.022 "name": 
"BaseBdev3", 00:21:26.022 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:26.022 "is_configured": true, 00:21:26.022 "data_offset": 0, 00:21:26.022 "data_size": 65536 00:21:26.022 }, 00:21:26.022 { 00:21:26.022 "name": "BaseBdev4", 00:21:26.022 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:26.022 "is_configured": true, 00:21:26.022 "data_offset": 0, 00:21:26.022 "data_size": 65536 00:21:26.022 } 00:21:26.022 ] 00:21:26.022 }' 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@660 -- # break 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.022 10:46:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.280 10:46:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.280 "name": "raid_bdev1", 00:21:26.280 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:26.280 "strip_size_kb": 0, 00:21:26.280 "state": "online", 00:21:26.280 "raid_level": "raid1", 00:21:26.280 "superblock": false, 00:21:26.280 "num_base_bdevs": 4, 00:21:26.280 "num_base_bdevs_discovered": 3, 00:21:26.280 "num_base_bdevs_operational": 3, 00:21:26.280 "base_bdevs_list": [ 00:21:26.280 { 00:21:26.280 "name": "spare", 00:21:26.280 "uuid": "adc0262b-dfd2-5a0c-a9bf-94316fcb3ff9", 00:21:26.280 "is_configured": true, 00:21:26.280 "data_offset": 0, 00:21:26.280 "data_size": 65536 00:21:26.280 }, 00:21:26.280 { 00:21:26.280 "name": null, 00:21:26.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.280 "is_configured": false, 00:21:26.280 "data_offset": 0, 00:21:26.280 "data_size": 65536 00:21:26.280 }, 00:21:26.280 { 00:21:26.280 "name": "BaseBdev3", 00:21:26.280 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:26.280 "is_configured": true, 00:21:26.280 "data_offset": 0, 00:21:26.280 "data_size": 65536 00:21:26.280 }, 00:21:26.280 { 00:21:26.280 "name": "BaseBdev4", 00:21:26.280 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:26.280 "is_configured": true, 00:21:26.280 "data_offset": 0, 00:21:26.280 "data_size": 65536 00:21:26.280 } 00:21:26.280 ] 00:21:26.280 }' 00:21:26.280 10:46:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:26.539 10:46:53 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.539 10:46:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.797 10:46:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:26.797 "name": "raid_bdev1", 00:21:26.797 "uuid": "5896a173-2cf3-46a2-84b3-b85931cd45cf", 00:21:26.797 "strip_size_kb": 0, 00:21:26.797 "state": "online", 00:21:26.797 "raid_level": "raid1", 00:21:26.797 "superblock": false, 00:21:26.797 "num_base_bdevs": 4, 00:21:26.797 "num_base_bdevs_discovered": 3, 00:21:26.797 "num_base_bdevs_operational": 3, 00:21:26.797 "base_bdevs_list": [ 00:21:26.797 { 00:21:26.797 "name": "spare", 00:21:26.797 "uuid": "adc0262b-dfd2-5a0c-a9bf-94316fcb3ff9", 00:21:26.797 "is_configured": true, 00:21:26.797 "data_offset": 0, 00:21:26.797 "data_size": 65536 00:21:26.797 }, 00:21:26.797 { 00:21:26.797 "name": null, 00:21:26.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.797 "is_configured": false, 00:21:26.797 "data_offset": 0, 00:21:26.797 "data_size": 65536 00:21:26.797 }, 00:21:26.797 { 00:21:26.797 "name": "BaseBdev3", 00:21:26.797 "uuid": "19329868-dbca-4dfe-890e-171acc58b902", 00:21:26.797 "is_configured": true, 00:21:26.797 "data_offset": 0, 00:21:26.797 "data_size": 65536 00:21:26.797 }, 00:21:26.797 { 00:21:26.797 "name": "BaseBdev4", 00:21:26.797 "uuid": "959fe768-afe9-49d8-8f96-6435d353fae1", 00:21:26.797 "is_configured": true, 00:21:26.797 "data_offset": 0, 00:21:26.797 "data_size": 65536 00:21:26.797 } 00:21:26.797 ] 00:21:26.797 }' 00:21:26.797 10:46:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:26.797 10:46:53 -- common/autotest_common.sh@10 -- # set +x 00:21:27.363 10:46:54 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:27.621 [2024-07-24 10:46:54.279882] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:27.621 [2024-07-24 10:46:54.280276] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.621 [2024-07-24 10:46:54.280547] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.621 [2024-07-24 10:46:54.280760] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.621 [2024-07-24 10:46:54.280883] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:21:27.621 10:46:54 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.621 10:46:54 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:27.922 10:46:54 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:27.922 10:46:54 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:27.922 10:46:54 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@12 -- # local i 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:27.922 10:46:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:28.180 /dev/nbd0 00:21:28.180 10:46:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:28.180 10:46:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:28.180 10:46:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:28.180 10:46:54 -- common/autotest_common.sh@857 -- # local i 00:21:28.180 10:46:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:28.180 10:46:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:28.180 10:46:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:28.438 10:46:54 -- common/autotest_common.sh@861 -- # break 00:21:28.438 10:46:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:28.438 10:46:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:28.438 10:46:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.438 1+0 records in 00:21:28.438 1+0 records out 00:21:28.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576796 s, 7.1 MB/s 00:21:28.438 10:46:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.438 10:46:54 -- common/autotest_common.sh@874 -- # size=4096 00:21:28.438 10:46:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.438 10:46:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:28.438 10:46:54 -- common/autotest_common.sh@877 -- # return 0 00:21:28.438 10:46:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.438 10:46:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:28.438 10:46:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:28.438 /dev/nbd1 00:21:28.696 10:46:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:28.696 10:46:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:28.696 10:46:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:28.696 10:46:55 -- common/autotest_common.sh@857 -- # local i 00:21:28.697 10:46:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:28.697 10:46:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:28.697 10:46:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:28.697 10:46:55 -- common/autotest_common.sh@861 -- # break 00:21:28.697 10:46:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:28.697 10:46:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:28.697 10:46:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.697 1+0 records in 00:21:28.697 1+0 records out 00:21:28.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373404 s, 11.0 MB/s 00:21:28.697 10:46:55 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.697 10:46:55 -- common/autotest_common.sh@874 -- # size=4096 00:21:28.697 10:46:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.697 10:46:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:28.697 10:46:55 -- common/autotest_common.sh@877 -- # return 0 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:28.697 10:46:55 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:28.697 10:46:55 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@51 -- # local i 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.697 10:46:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@41 -- # break 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@45 -- # return 0 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:28.955 10:46:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:29.213 10:46:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:29.213 10:46:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:29.213 10:46:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:29.213 10:46:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.214 10:46:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.214 10:46:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.214 10:46:55 -- bdev/nbd_common.sh@41 -- # break 00:21:29.214 10:46:55 -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.214 10:46:55 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:29.214 10:46:55 -- bdev/bdev_raid.sh@709 -- # killprocess 135569 00:21:29.214 10:46:55 -- common/autotest_common.sh@926 -- # '[' -z 135569 ']' 00:21:29.214 10:46:55 -- common/autotest_common.sh@930 -- # kill -0 135569 00:21:29.214 10:46:55 -- common/autotest_common.sh@931 -- # uname 00:21:29.214 10:46:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:29.214 10:46:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135569 00:21:29.214 10:46:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:29.214 10:46:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:29.214 10:46:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135569' 00:21:29.214 killing process with pid 135569 00:21:29.214 10:46:55 -- common/autotest_common.sh@945 -- # kill 135569 00:21:29.214 Received shutdown 
signal, test time was about 60.000000 seconds 00:21:29.214 00:21:29.214 Latency(us) 00:21:29.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.214 =================================================================================================================== 00:21:29.214 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:29.214 [2024-07-24 10:46:55.792693] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:29.214 10:46:55 -- common/autotest_common.sh@950 -- # wait 135569 00:21:29.214 [2024-07-24 10:46:55.888644] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:29.780 00:21:29.780 real 0m23.083s 00:21:29.780 user 0m31.497s 00:21:29.780 sys 0m5.279s 00:21:29.780 10:46:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:29.780 10:46:56 -- common/autotest_common.sh@10 -- # set +x 00:21:29.780 ************************************ 00:21:29.780 END TEST raid_rebuild_test 00:21:29.780 ************************************ 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:29.780 10:46:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:29.780 10:46:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:29.780 10:46:56 -- common/autotest_common.sh@10 -- # set +x 00:21:29.780 ************************************ 00:21:29.780 START TEST raid_rebuild_test_sb 00:21:29.780 ************************************ 00:21:29.780 10:46:56 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@528 -- # '[' raid1 
'!=' raid1 ']' 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@544 -- # raid_pid=136118 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:29.780 10:46:56 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136118 /var/tmp/spdk-raid.sock 00:21:29.780 10:46:56 -- common/autotest_common.sh@819 -- # '[' -z 136118 ']' 00:21:29.780 10:46:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:29.780 10:46:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:29.780 10:46:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:29.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:29.780 10:46:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:29.780 10:46:56 -- common/autotest_common.sh@10 -- # set +x 00:21:29.780 [2024-07-24 10:46:56.428241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:21:29.780 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:29.780 Zero copy mechanism will not be used. 00:21:29.780 [2024-07-24 10:46:56.428506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136118 ] 00:21:30.039 [2024-07-24 10:46:56.575256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.039 [2024-07-24 10:46:56.703364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.297 [2024-07-24 10:46:56.777775] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:30.863 10:46:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:30.863 10:46:57 -- common/autotest_common.sh@852 -- # return 0 00:21:30.863 10:46:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:30.863 10:46:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:30.863 10:46:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:31.122 BaseBdev1_malloc 00:21:31.122 10:46:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:31.380 [2024-07-24 10:46:57.901387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:31.380 [2024-07-24 10:46:57.901575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.380 [2024-07-24 10:46:57.901635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:21:31.380 [2024-07-24 10:46:57.901696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.380 [2024-07-24 10:46:57.904630] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.380 [2024-07-24 10:46:57.904714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:31.380 BaseBdev1 
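The trace above records the base-bdev setup for this test: each BaseBdevN is a 32 MiB, 512-byte-block malloc bdev wrapped in a passthru bdev, so that the RAID module later claims the wrapper rather than the backing malloc device. A minimal sketch of replaying those two RPCs per device by hand, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and rpc.py is invoked from the repository root (the loop itself is illustrative, not part of the test script):

RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
  # 32 MiB malloc bdev with 512-byte blocks, matching the trace above
  $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  # passthru wrapper so the raid bdev claims BaseBdevN instead of the malloc bdev
  $RPC bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done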
00:21:31.380 10:46:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:31.380 10:46:57 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:31.380 10:46:57 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:31.638 BaseBdev2_malloc 00:21:31.638 10:46:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:31.896 [2024-07-24 10:46:58.395652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:31.896 [2024-07-24 10:46:58.395812] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:31.896 [2024-07-24 10:46:58.395863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:21:31.896 [2024-07-24 10:46:58.395947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:31.896 [2024-07-24 10:46:58.398816] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:31.896 [2024-07-24 10:46:58.398891] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:31.896 BaseBdev2 00:21:31.896 10:46:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:31.896 10:46:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:31.896 10:46:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:32.173 BaseBdev3_malloc 00:21:32.173 10:46:58 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:32.431 [2024-07-24 10:46:58.968340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:32.431 [2024-07-24 10:46:58.968480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.431 [2024-07-24 10:46:58.968535] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:32.431 [2024-07-24 10:46:58.968614] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.431 [2024-07-24 10:46:58.971580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.431 [2024-07-24 10:46:58.971657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:32.431 BaseBdev3 00:21:32.431 10:46:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.431 10:46:58 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:32.431 10:46:58 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:32.689 BaseBdev4_malloc 00:21:32.689 10:46:59 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:32.948 [2024-07-24 10:46:59.447159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:32.948 [2024-07-24 10:46:59.447371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.948 [2024-07-24 10:46:59.447418] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:32.948 [2024-07-24 10:46:59.447490] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.948 [2024-07-24 10:46:59.450435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.948 [2024-07-24 10:46:59.450550] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:32.948 BaseBdev4 00:21:32.948 10:46:59 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:33.205 spare_malloc 00:21:33.205 10:46:59 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:33.463 spare_delay 00:21:33.463 10:47:00 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:33.721 [2024-07-24 10:47:00.246354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:33.721 [2024-07-24 10:47:00.246543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.721 [2024-07-24 10:47:00.246591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:33.721 [2024-07-24 10:47:00.246643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.721 [2024-07-24 10:47:00.249688] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.721 [2024-07-24 10:47:00.249777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:33.721 spare 00:21:33.721 10:47:00 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:33.978 [2024-07-24 10:47:00.470548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:33.978 [2024-07-24 10:47:00.473155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:33.978 [2024-07-24 10:47:00.473246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:33.978 [2024-07-24 10:47:00.473325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:33.978 [2024-07-24 10:47:00.473623] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:33.978 [2024-07-24 10:47:00.473639] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:33.978 [2024-07-24 10:47:00.473821] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:21:33.978 [2024-07-24 10:47:00.474306] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:33.978 [2024-07-24 10:47:00.474332] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:33.978 [2024-07-24 10:47:00.474556] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@120 -- 
# local strip_size=0 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.978 10:47:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.236 10:47:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.236 "name": "raid_bdev1", 00:21:34.236 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:34.236 "strip_size_kb": 0, 00:21:34.236 "state": "online", 00:21:34.236 "raid_level": "raid1", 00:21:34.236 "superblock": true, 00:21:34.236 "num_base_bdevs": 4, 00:21:34.236 "num_base_bdevs_discovered": 4, 00:21:34.236 "num_base_bdevs_operational": 4, 00:21:34.236 "base_bdevs_list": [ 00:21:34.236 { 00:21:34.236 "name": "BaseBdev1", 00:21:34.236 "uuid": "c2ddbdb2-c693-5201-925c-7d49723b103b", 00:21:34.236 "is_configured": true, 00:21:34.236 "data_offset": 2048, 00:21:34.236 "data_size": 63488 00:21:34.236 }, 00:21:34.236 { 00:21:34.236 "name": "BaseBdev2", 00:21:34.236 "uuid": "3ee68519-3ca9-5931-af34-c22ad7b8aaca", 00:21:34.236 "is_configured": true, 00:21:34.236 "data_offset": 2048, 00:21:34.236 "data_size": 63488 00:21:34.236 }, 00:21:34.236 { 00:21:34.236 "name": "BaseBdev3", 00:21:34.236 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:34.236 "is_configured": true, 00:21:34.236 "data_offset": 2048, 00:21:34.236 "data_size": 63488 00:21:34.236 }, 00:21:34.236 { 00:21:34.236 "name": "BaseBdev4", 00:21:34.236 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:34.236 "is_configured": true, 00:21:34.236 "data_offset": 2048, 00:21:34.236 "data_size": 63488 00:21:34.236 } 00:21:34.236 ] 00:21:34.236 }' 00:21:34.236 10:47:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.236 10:47:00 -- common/autotest_common.sh@10 -- # set +x 00:21:34.801 10:47:01 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:34.801 10:47:01 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:35.059 [2024-07-24 10:47:01.603142] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.059 10:47:01 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:35.059 10:47:01 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.059 10:47:01 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:35.355 10:47:01 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:35.355 10:47:01 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:35.355 10:47:01 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:35.355 10:47:01 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 
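The steps recorded above assemble the four passthru bdevs into a superblock-backed raid1 volume and then verify it: bdev_raid_create is called with -s, and the resulting bdev reports 63488 data blocks with a data_offset of 2048 blocks reserved for the superblock. A hand-run sketch of the same create-and-check sequence, with the socket path, bdev names, and expected values taken from the trace:

RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
# size check: 63488 blocks of 512 B, data starting 2048 blocks in (superblock region)
$RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks'
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state, .num_base_bdevs_discovered'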
00:21:35.355 10:47:01 -- bdev/nbd_common.sh@12 -- # local i 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.355 10:47:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:35.612 [2024-07-24 10:47:02.127092] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:21:35.612 /dev/nbd0 00:21:35.612 10:47:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:35.612 10:47:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:35.612 10:47:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:35.612 10:47:02 -- common/autotest_common.sh@857 -- # local i 00:21:35.612 10:47:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:35.612 10:47:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:35.612 10:47:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:35.612 10:47:02 -- common/autotest_common.sh@861 -- # break 00:21:35.612 10:47:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:35.612 10:47:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:35.612 10:47:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:35.612 1+0 records in 00:21:35.612 1+0 records out 00:21:35.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421118 s, 9.7 MB/s 00:21:35.612 10:47:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.612 10:47:02 -- common/autotest_common.sh@874 -- # size=4096 00:21:35.612 10:47:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:35.612 10:47:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:35.612 10:47:02 -- common/autotest_common.sh@877 -- # return 0 00:21:35.612 10:47:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:35.612 10:47:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.612 10:47:02 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:35.612 10:47:02 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:35.612 10:47:02 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:42.174 63488+0 records in 00:21:42.174 63488+0 records out 00:21:42.174 32505856 bytes (33 MB, 31 MiB) copied, 6.38656 s, 5.1 MB/s 00:21:42.174 10:47:08 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:42.174 10:47:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:42.174 10:47:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:42.174 10:47:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:42.174 10:47:08 -- bdev/nbd_common.sh@51 -- # local i 00:21:42.174 10:47:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.174 10:47:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:42.432 [2024-07-24 10:47:08.879851] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@37 -- 
# (( i <= 20 )) 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@41 -- # break 00:21:42.432 10:47:08 -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.432 10:47:08 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:42.432 [2024-07-24 10:47:09.115509] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.690 "name": "raid_bdev1", 00:21:42.690 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:42.690 "strip_size_kb": 0, 00:21:42.690 "state": "online", 00:21:42.690 "raid_level": "raid1", 00:21:42.690 "superblock": true, 00:21:42.690 "num_base_bdevs": 4, 00:21:42.690 "num_base_bdevs_discovered": 3, 00:21:42.690 "num_base_bdevs_operational": 3, 00:21:42.690 "base_bdevs_list": [ 00:21:42.690 { 00:21:42.690 "name": null, 00:21:42.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.690 "is_configured": false, 00:21:42.690 "data_offset": 2048, 00:21:42.690 "data_size": 63488 00:21:42.690 }, 00:21:42.690 { 00:21:42.690 "name": "BaseBdev2", 00:21:42.690 "uuid": "3ee68519-3ca9-5931-af34-c22ad7b8aaca", 00:21:42.690 "is_configured": true, 00:21:42.690 "data_offset": 2048, 00:21:42.690 "data_size": 63488 00:21:42.690 }, 00:21:42.690 { 00:21:42.690 "name": "BaseBdev3", 00:21:42.690 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:42.690 "is_configured": true, 00:21:42.690 "data_offset": 2048, 00:21:42.690 "data_size": 63488 00:21:42.690 }, 00:21:42.690 { 00:21:42.690 "name": "BaseBdev4", 00:21:42.690 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:42.690 "is_configured": true, 00:21:42.690 "data_offset": 2048, 00:21:42.690 "data_size": 63488 00:21:42.690 } 00:21:42.690 ] 00:21:42.690 }' 00:21:42.690 10:47:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.690 10:47:09 -- common/autotest_common.sh@10 -- # set +x 00:21:43.625 10:47:10 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.625 [2024-07-24 10:47:10.279891] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:43.625 [2024-07-24 10:47:10.279975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.625 [2024-07-24 10:47:10.286030] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000c3e420 00:21:43.625 [2024-07-24 10:47:10.288637] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:43.625 10:47:10 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.000 "name": "raid_bdev1", 00:21:45.000 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:45.000 "strip_size_kb": 0, 00:21:45.000 "state": "online", 00:21:45.000 "raid_level": "raid1", 00:21:45.000 "superblock": true, 00:21:45.000 "num_base_bdevs": 4, 00:21:45.000 "num_base_bdevs_discovered": 4, 00:21:45.000 "num_base_bdevs_operational": 4, 00:21:45.000 "process": { 00:21:45.000 "type": "rebuild", 00:21:45.000 "target": "spare", 00:21:45.000 "progress": { 00:21:45.000 "blocks": 24576, 00:21:45.000 "percent": 38 00:21:45.000 } 00:21:45.000 }, 00:21:45.000 "base_bdevs_list": [ 00:21:45.000 { 00:21:45.000 "name": "spare", 00:21:45.000 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:45.000 "is_configured": true, 00:21:45.000 "data_offset": 2048, 00:21:45.000 "data_size": 63488 00:21:45.000 }, 00:21:45.000 { 00:21:45.000 "name": "BaseBdev2", 00:21:45.000 "uuid": "3ee68519-3ca9-5931-af34-c22ad7b8aaca", 00:21:45.000 "is_configured": true, 00:21:45.000 "data_offset": 2048, 00:21:45.000 "data_size": 63488 00:21:45.000 }, 00:21:45.000 { 00:21:45.000 "name": "BaseBdev3", 00:21:45.000 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:45.000 "is_configured": true, 00:21:45.000 "data_offset": 2048, 00:21:45.000 "data_size": 63488 00:21:45.000 }, 00:21:45.000 { 00:21:45.000 "name": "BaseBdev4", 00:21:45.000 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:45.000 "is_configured": true, 00:21:45.000 "data_offset": 2048, 00:21:45.000 "data_size": 63488 00:21:45.000 } 00:21:45.000 ] 00:21:45.000 }' 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:45.000 10:47:11 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:45.286 [2024-07-24 10:47:11.923942] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.545 [2024-07-24 10:47:12.003292] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:45.545 [2024-07-24 10:47:12.003473] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.545 
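Just above, the test degrades the array with bdev_raid_remove_base_bdev BaseBdev1, re-attaches the spare with bdev_raid_add_base_bdev, and then polls the rebuild this kicks off, reading .process.type, .process.target, and the blocks/percent progress out of bdev_raid_get_bdevs. A small sketch of the same poll loop, assuming the same socket and bdev names; the loop structure is illustrative rather than copied from bdev_raid.sh:

RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_remove_base_bdev BaseBdev1        # degrade: 3 of 4 base bdevs stay online
$RPC bdev_raid_add_base_bdev raid_bdev1 spare    # re-attach a spare; rebuild starts
while :; do
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # once the rebuild process disappears from the RPC output, the array is rebuilt
  [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
  jq -r '.process.progress | "\(.blocks) blocks rebuilt (\(.percent)%)"' <<<"$info"
  sleep 1
done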
10:47:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.545 10:47:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.803 10:47:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.803 "name": "raid_bdev1", 00:21:45.803 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:45.803 "strip_size_kb": 0, 00:21:45.803 "state": "online", 00:21:45.803 "raid_level": "raid1", 00:21:45.803 "superblock": true, 00:21:45.803 "num_base_bdevs": 4, 00:21:45.803 "num_base_bdevs_discovered": 3, 00:21:45.803 "num_base_bdevs_operational": 3, 00:21:45.803 "base_bdevs_list": [ 00:21:45.803 { 00:21:45.803 "name": null, 00:21:45.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.803 "is_configured": false, 00:21:45.803 "data_offset": 2048, 00:21:45.803 "data_size": 63488 00:21:45.803 }, 00:21:45.803 { 00:21:45.803 "name": "BaseBdev2", 00:21:45.803 "uuid": "3ee68519-3ca9-5931-af34-c22ad7b8aaca", 00:21:45.803 "is_configured": true, 00:21:45.803 "data_offset": 2048, 00:21:45.803 "data_size": 63488 00:21:45.803 }, 00:21:45.803 { 00:21:45.803 "name": "BaseBdev3", 00:21:45.803 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:45.803 "is_configured": true, 00:21:45.803 "data_offset": 2048, 00:21:45.803 "data_size": 63488 00:21:45.803 }, 00:21:45.803 { 00:21:45.803 "name": "BaseBdev4", 00:21:45.803 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:45.803 "is_configured": true, 00:21:45.803 "data_offset": 2048, 00:21:45.803 "data_size": 63488 00:21:45.803 } 00:21:45.803 ] 00:21:45.803 }' 00:21:45.803 10:47:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.803 10:47:12 -- common/autotest_common.sh@10 -- # set +x 00:21:46.369 10:47:12 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.369 10:47:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:46.369 10:47:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:46.369 10:47:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:46.369 10:47:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:46.369 10:47:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.369 10:47:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.626 10:47:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:46.626 "name": "raid_bdev1", 00:21:46.626 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:46.626 "strip_size_kb": 0, 00:21:46.626 "state": "online", 00:21:46.626 "raid_level": "raid1", 00:21:46.626 "superblock": true, 00:21:46.626 "num_base_bdevs": 4, 00:21:46.626 "num_base_bdevs_discovered": 3, 00:21:46.626 "num_base_bdevs_operational": 3, 00:21:46.626 "base_bdevs_list": [ 00:21:46.626 { 00:21:46.626 "name": null, 00:21:46.627 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:46.627 "is_configured": false, 00:21:46.627 "data_offset": 2048, 00:21:46.627 "data_size": 63488 00:21:46.627 }, 00:21:46.627 { 00:21:46.627 "name": "BaseBdev2", 00:21:46.627 "uuid": "3ee68519-3ca9-5931-af34-c22ad7b8aaca", 00:21:46.627 "is_configured": true, 00:21:46.627 "data_offset": 2048, 00:21:46.627 "data_size": 63488 00:21:46.627 }, 00:21:46.627 { 00:21:46.627 "name": "BaseBdev3", 00:21:46.627 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:46.627 "is_configured": true, 00:21:46.627 "data_offset": 2048, 00:21:46.627 "data_size": 63488 00:21:46.627 }, 00:21:46.627 { 00:21:46.627 "name": "BaseBdev4", 00:21:46.627 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:46.627 "is_configured": true, 00:21:46.627 "data_offset": 2048, 00:21:46.627 "data_size": 63488 00:21:46.627 } 00:21:46.627 ] 00:21:46.627 }' 00:21:46.627 10:47:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:46.883 10:47:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:46.883 10:47:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:46.883 10:47:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:46.883 10:47:13 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:47.141 [2024-07-24 10:47:13.643254] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:47.141 [2024-07-24 10:47:13.643351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:47.141 [2024-07-24 10:47:13.649411] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e5c0 00:21:47.141 [2024-07-24 10:47:13.651969] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:47.141 10:47:13 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:48.075 10:47:14 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.075 10:47:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.075 10:47:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.075 10:47:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.075 10:47:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.075 10:47:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.075 10:47:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.333 10:47:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.333 "name": "raid_bdev1", 00:21:48.333 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:48.333 "strip_size_kb": 0, 00:21:48.333 "state": "online", 00:21:48.333 "raid_level": "raid1", 00:21:48.333 "superblock": true, 00:21:48.333 "num_base_bdevs": 4, 00:21:48.333 "num_base_bdevs_discovered": 4, 00:21:48.333 "num_base_bdevs_operational": 4, 00:21:48.333 "process": { 00:21:48.333 "type": "rebuild", 00:21:48.333 "target": "spare", 00:21:48.333 "progress": { 00:21:48.333 "blocks": 24576, 00:21:48.333 "percent": 38 00:21:48.333 } 00:21:48.333 }, 00:21:48.333 "base_bdevs_list": [ 00:21:48.333 { 00:21:48.333 "name": "spare", 00:21:48.333 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:48.333 "is_configured": true, 00:21:48.333 "data_offset": 2048, 00:21:48.333 "data_size": 63488 00:21:48.333 }, 00:21:48.333 { 00:21:48.333 "name": "BaseBdev2", 00:21:48.333 "uuid": 
"3ee68519-3ca9-5931-af34-c22ad7b8aaca", 00:21:48.334 "is_configured": true, 00:21:48.334 "data_offset": 2048, 00:21:48.334 "data_size": 63488 00:21:48.334 }, 00:21:48.334 { 00:21:48.334 "name": "BaseBdev3", 00:21:48.334 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:48.334 "is_configured": true, 00:21:48.334 "data_offset": 2048, 00:21:48.334 "data_size": 63488 00:21:48.334 }, 00:21:48.334 { 00:21:48.334 "name": "BaseBdev4", 00:21:48.334 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:48.334 "is_configured": true, 00:21:48.334 "data_offset": 2048, 00:21:48.334 "data_size": 63488 00:21:48.334 } 00:21:48.334 ] 00:21:48.334 }' 00:21:48.334 10:47:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.334 10:47:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.334 10:47:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.592 10:47:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.592 10:47:15 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:48.592 10:47:15 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:48.592 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:48.592 10:47:15 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:48.592 10:47:15 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:48.592 10:47:15 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:48.592 10:47:15 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:48.592 [2024-07-24 10:47:15.266676] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:48.850 [2024-07-24 10:47:15.365375] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e5c0 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.850 10:47:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.108 10:47:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.108 "name": "raid_bdev1", 00:21:49.108 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:49.108 "strip_size_kb": 0, 00:21:49.108 "state": "online", 00:21:49.108 "raid_level": "raid1", 00:21:49.108 "superblock": true, 00:21:49.108 "num_base_bdevs": 4, 00:21:49.108 "num_base_bdevs_discovered": 3, 00:21:49.108 "num_base_bdevs_operational": 3, 00:21:49.108 "process": { 00:21:49.108 "type": "rebuild", 00:21:49.108 "target": "spare", 00:21:49.108 "progress": { 00:21:49.108 "blocks": 40960, 00:21:49.108 "percent": 64 00:21:49.108 } 00:21:49.108 }, 00:21:49.108 "base_bdevs_list": [ 00:21:49.108 { 00:21:49.108 "name": "spare", 00:21:49.108 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:49.108 "is_configured": true, 00:21:49.108 "data_offset": 2048, 00:21:49.108 "data_size": 63488 00:21:49.108 }, 
00:21:49.108 { 00:21:49.108 "name": null, 00:21:49.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.108 "is_configured": false, 00:21:49.108 "data_offset": 2048, 00:21:49.108 "data_size": 63488 00:21:49.108 }, 00:21:49.108 { 00:21:49.108 "name": "BaseBdev3", 00:21:49.108 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:49.108 "is_configured": true, 00:21:49.108 "data_offset": 2048, 00:21:49.108 "data_size": 63488 00:21:49.109 }, 00:21:49.109 { 00:21:49.109 "name": "BaseBdev4", 00:21:49.109 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:49.109 "is_configured": true, 00:21:49.109 "data_offset": 2048, 00:21:49.109 "data_size": 63488 00:21:49.109 } 00:21:49.109 ] 00:21:49.109 }' 00:21:49.109 10:47:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@657 -- # local timeout=515 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.367 10:47:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.625 10:47:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.625 "name": "raid_bdev1", 00:21:49.625 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:49.625 "strip_size_kb": 0, 00:21:49.625 "state": "online", 00:21:49.625 "raid_level": "raid1", 00:21:49.625 "superblock": true, 00:21:49.625 "num_base_bdevs": 4, 00:21:49.625 "num_base_bdevs_discovered": 3, 00:21:49.625 "num_base_bdevs_operational": 3, 00:21:49.625 "process": { 00:21:49.625 "type": "rebuild", 00:21:49.625 "target": "spare", 00:21:49.625 "progress": { 00:21:49.626 "blocks": 49152, 00:21:49.626 "percent": 77 00:21:49.626 } 00:21:49.626 }, 00:21:49.626 "base_bdevs_list": [ 00:21:49.626 { 00:21:49.626 "name": "spare", 00:21:49.626 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:49.626 "is_configured": true, 00:21:49.626 "data_offset": 2048, 00:21:49.626 "data_size": 63488 00:21:49.626 }, 00:21:49.626 { 00:21:49.626 "name": null, 00:21:49.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.626 "is_configured": false, 00:21:49.626 "data_offset": 2048, 00:21:49.626 "data_size": 63488 00:21:49.626 }, 00:21:49.626 { 00:21:49.626 "name": "BaseBdev3", 00:21:49.626 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:49.626 "is_configured": true, 00:21:49.626 "data_offset": 2048, 00:21:49.626 "data_size": 63488 00:21:49.626 }, 00:21:49.626 { 00:21:49.626 "name": "BaseBdev4", 00:21:49.626 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:49.626 "is_configured": true, 00:21:49.626 "data_offset": 2048, 00:21:49.626 "data_size": 63488 00:21:49.626 } 00:21:49.626 ] 00:21:49.626 }' 00:21:49.626 10:47:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.626 10:47:16 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.626 10:47:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.626 10:47:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.626 10:47:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:50.218 [2024-07-24 10:47:16.777127] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:50.218 [2024-07-24 10:47:16.777270] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:50.218 [2024-07-24 10:47:16.777502] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.783 10:47:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:51.041 "name": "raid_bdev1", 00:21:51.041 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:51.041 "strip_size_kb": 0, 00:21:51.041 "state": "online", 00:21:51.041 "raid_level": "raid1", 00:21:51.041 "superblock": true, 00:21:51.041 "num_base_bdevs": 4, 00:21:51.041 "num_base_bdevs_discovered": 3, 00:21:51.041 "num_base_bdevs_operational": 3, 00:21:51.041 "base_bdevs_list": [ 00:21:51.041 { 00:21:51.041 "name": "spare", 00:21:51.041 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:51.041 "is_configured": true, 00:21:51.041 "data_offset": 2048, 00:21:51.041 "data_size": 63488 00:21:51.041 }, 00:21:51.041 { 00:21:51.041 "name": null, 00:21:51.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.041 "is_configured": false, 00:21:51.041 "data_offset": 2048, 00:21:51.041 "data_size": 63488 00:21:51.041 }, 00:21:51.041 { 00:21:51.041 "name": "BaseBdev3", 00:21:51.041 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:51.041 "is_configured": true, 00:21:51.041 "data_offset": 2048, 00:21:51.041 "data_size": 63488 00:21:51.041 }, 00:21:51.041 { 00:21:51.041 "name": "BaseBdev4", 00:21:51.041 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:51.041 "is_configured": true, 00:21:51.041 "data_offset": 2048, 00:21:51.041 "data_size": 63488 00:21:51.041 } 00:21:51.041 ] 00:21:51.041 }' 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@660 -- # break 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@186 -- # 
local raid_bdev_info 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.041 10:47:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:51.300 "name": "raid_bdev1", 00:21:51.300 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:51.300 "strip_size_kb": 0, 00:21:51.300 "state": "online", 00:21:51.300 "raid_level": "raid1", 00:21:51.300 "superblock": true, 00:21:51.300 "num_base_bdevs": 4, 00:21:51.300 "num_base_bdevs_discovered": 3, 00:21:51.300 "num_base_bdevs_operational": 3, 00:21:51.300 "base_bdevs_list": [ 00:21:51.300 { 00:21:51.300 "name": "spare", 00:21:51.300 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:51.300 "is_configured": true, 00:21:51.300 "data_offset": 2048, 00:21:51.300 "data_size": 63488 00:21:51.300 }, 00:21:51.300 { 00:21:51.300 "name": null, 00:21:51.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.300 "is_configured": false, 00:21:51.300 "data_offset": 2048, 00:21:51.300 "data_size": 63488 00:21:51.300 }, 00:21:51.300 { 00:21:51.300 "name": "BaseBdev3", 00:21:51.300 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:51.300 "is_configured": true, 00:21:51.300 "data_offset": 2048, 00:21:51.300 "data_size": 63488 00:21:51.300 }, 00:21:51.300 { 00:21:51.300 "name": "BaseBdev4", 00:21:51.300 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:51.300 "is_configured": true, 00:21:51.300 "data_offset": 2048, 00:21:51.300 "data_size": 63488 00:21:51.300 } 00:21:51.300 ] 00:21:51.300 }' 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.300 10:47:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.871 10:47:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.871 "name": "raid_bdev1", 00:21:51.871 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:51.871 "strip_size_kb": 0, 00:21:51.871 "state": "online", 00:21:51.871 "raid_level": "raid1", 00:21:51.871 "superblock": true, 00:21:51.871 "num_base_bdevs": 4, 00:21:51.871 "num_base_bdevs_discovered": 3, 00:21:51.871 "num_base_bdevs_operational": 3, 00:21:51.871 "base_bdevs_list": [ 00:21:51.871 { 00:21:51.871 "name": "spare", 00:21:51.871 "uuid": 
"efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:51.871 "is_configured": true, 00:21:51.871 "data_offset": 2048, 00:21:51.871 "data_size": 63488 00:21:51.871 }, 00:21:51.871 { 00:21:51.871 "name": null, 00:21:51.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.871 "is_configured": false, 00:21:51.871 "data_offset": 2048, 00:21:51.871 "data_size": 63488 00:21:51.871 }, 00:21:51.871 { 00:21:51.871 "name": "BaseBdev3", 00:21:51.871 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:51.871 "is_configured": true, 00:21:51.871 "data_offset": 2048, 00:21:51.871 "data_size": 63488 00:21:51.871 }, 00:21:51.871 { 00:21:51.871 "name": "BaseBdev4", 00:21:51.871 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:51.871 "is_configured": true, 00:21:51.871 "data_offset": 2048, 00:21:51.871 "data_size": 63488 00:21:51.871 } 00:21:51.871 ] 00:21:51.871 }' 00:21:51.871 10:47:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.871 10:47:18 -- common/autotest_common.sh@10 -- # set +x 00:21:52.437 10:47:18 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:52.695 [2024-07-24 10:47:19.210203] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:52.695 [2024-07-24 10:47:19.210268] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.695 [2024-07-24 10:47:19.210417] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.695 [2024-07-24 10:47:19.210536] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.695 [2024-07-24 10:47:19.210550] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:52.695 10:47:19 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.695 10:47:19 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:52.953 10:47:19 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:52.953 10:47:19 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:52.953 10:47:19 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@12 -- # local i 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:52.953 10:47:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:53.218 /dev/nbd0 00:21:53.218 10:47:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:53.218 10:47:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:53.218 10:47:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:53.218 10:47:19 -- common/autotest_common.sh@857 -- # local i 00:21:53.218 10:47:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:53.218 10:47:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:53.218 10:47:19 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:53.218 10:47:19 -- common/autotest_common.sh@861 -- # break 00:21:53.218 10:47:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:53.218 10:47:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:53.218 10:47:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:53.218 1+0 records in 00:21:53.218 1+0 records out 00:21:53.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590579 s, 6.9 MB/s 00:21:53.218 10:47:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:53.218 10:47:19 -- common/autotest_common.sh@874 -- # size=4096 00:21:53.218 10:47:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:53.218 10:47:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:53.218 10:47:19 -- common/autotest_common.sh@877 -- # return 0 00:21:53.218 10:47:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:53.218 10:47:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:53.218 10:47:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:53.476 /dev/nbd1 00:21:53.476 10:47:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:53.476 10:47:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:53.476 10:47:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:53.476 10:47:20 -- common/autotest_common.sh@857 -- # local i 00:21:53.476 10:47:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:53.476 10:47:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:53.476 10:47:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:53.476 10:47:20 -- common/autotest_common.sh@861 -- # break 00:21:53.476 10:47:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:53.476 10:47:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:53.476 10:47:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:53.476 1+0 records in 00:21:53.476 1+0 records out 00:21:53.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492116 s, 8.3 MB/s 00:21:53.476 10:47:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:53.476 10:47:20 -- common/autotest_common.sh@874 -- # size=4096 00:21:53.476 10:47:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:53.476 10:47:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:53.476 10:47:20 -- common/autotest_common.sh@877 -- # return 0 00:21:53.476 10:47:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:53.476 10:47:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:53.476 10:47:20 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:53.733 10:47:20 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:53.733 10:47:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:53.733 10:47:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:53.733 10:47:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:53.733 10:47:20 -- bdev/nbd_common.sh@51 -- # local i 00:21:53.733 10:47:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:53.733 10:47:20 -- bdev/nbd_common.sh@54 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@41 -- # break 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@45 -- # return 0 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:53.993 10:47:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@41 -- # break 00:21:54.253 10:47:20 -- bdev/nbd_common.sh@45 -- # return 0 00:21:54.253 10:47:20 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:54.253 10:47:20 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:54.253 10:47:20 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:54.253 10:47:20 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:54.510 10:47:21 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:54.768 [2024-07-24 10:47:21.367235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:54.768 [2024-07-24 10:47:21.367411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.768 [2024-07-24 10:47:21.367466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:54.768 [2024-07-24 10:47:21.367493] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.768 [2024-07-24 10:47:21.370319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.768 [2024-07-24 10:47:21.370406] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:54.768 [2024-07-24 10:47:21.370525] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:54.768 [2024-07-24 10:47:21.370627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.768 BaseBdev1 00:21:54.768 10:47:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:54.768 10:47:21 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:54.768 10:47:21 -- bdev/bdev_raid.sh@696 -- # continue 00:21:54.768 10:47:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:54.768 10:47:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:54.768 10:47:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:55.025 10:47:21 -- 
bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:55.283 [2024-07-24 10:47:21.907351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:55.283 [2024-07-24 10:47:21.907502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.283 [2024-07-24 10:47:21.907572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:55.283 [2024-07-24 10:47:21.907604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.283 [2024-07-24 10:47:21.908120] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.283 [2024-07-24 10:47:21.908193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:55.283 [2024-07-24 10:47:21.908311] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:55.283 [2024-07-24 10:47:21.908328] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:55.283 [2024-07-24 10:47:21.908336] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.283 [2024-07-24 10:47:21.908369] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:21:55.283 [2024-07-24 10:47:21.908429] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:55.283 BaseBdev3 00:21:55.283 10:47:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:55.283 10:47:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:55.283 10:47:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:55.541 10:47:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:55.799 [2024-07-24 10:47:22.427467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:55.799 [2024-07-24 10:47:22.427680] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.799 [2024-07-24 10:47:22.427736] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:55.799 [2024-07-24 10:47:22.427768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.799 [2024-07-24 10:47:22.428405] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.799 [2024-07-24 10:47:22.428489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:55.799 [2024-07-24 10:47:22.428591] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:55.799 [2024-07-24 10:47:22.428629] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:55.799 BaseBdev4 00:21:55.799 10:47:22 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:56.057 10:47:22 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:56.315 [2024-07-24 10:47:22.903623] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on spare_delay 00:21:56.316 [2024-07-24 10:47:22.903778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.316 [2024-07-24 10:47:22.903825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:56.316 [2024-07-24 10:47:22.903859] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.316 [2024-07-24 10:47:22.904440] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.316 [2024-07-24 10:47:22.904533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:56.316 [2024-07-24 10:47:22.904666] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:56.316 [2024-07-24 10:47:22.904716] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:56.316 spare 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.316 10:47:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.574 [2024-07-24 10:47:23.004893] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:21:56.574 [2024-07-24 10:47:23.004952] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:56.574 [2024-07-24 10:47:23.005167] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caf0b0 00:21:56.574 [2024-07-24 10:47:23.005742] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:21:56.574 [2024-07-24 10:47:23.005765] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:21:56.574 [2024-07-24 10:47:23.005962] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.574 10:47:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:56.574 "name": "raid_bdev1", 00:21:56.574 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:56.574 "strip_size_kb": 0, 00:21:56.574 "state": "online", 00:21:56.574 "raid_level": "raid1", 00:21:56.574 "superblock": true, 00:21:56.574 "num_base_bdevs": 4, 00:21:56.574 "num_base_bdevs_discovered": 3, 00:21:56.574 "num_base_bdevs_operational": 3, 00:21:56.574 "base_bdevs_list": [ 00:21:56.574 { 00:21:56.574 "name": "spare", 00:21:56.574 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:56.574 "is_configured": true, 00:21:56.574 "data_offset": 2048, 00:21:56.574 "data_size": 63488 00:21:56.574 }, 00:21:56.574 { 00:21:56.574 "name": null, 00:21:56.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.574 "is_configured": false, 
00:21:56.574 "data_offset": 2048, 00:21:56.574 "data_size": 63488 00:21:56.574 }, 00:21:56.574 { 00:21:56.574 "name": "BaseBdev3", 00:21:56.574 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:56.574 "is_configured": true, 00:21:56.574 "data_offset": 2048, 00:21:56.574 "data_size": 63488 00:21:56.574 }, 00:21:56.574 { 00:21:56.574 "name": "BaseBdev4", 00:21:56.574 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:56.574 "is_configured": true, 00:21:56.574 "data_offset": 2048, 00:21:56.575 "data_size": 63488 00:21:56.575 } 00:21:56.575 ] 00:21:56.575 }' 00:21:56.575 10:47:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:56.575 10:47:23 -- common/autotest_common.sh@10 -- # set +x 00:21:57.512 10:47:23 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.512 10:47:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.512 10:47:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:57.512 10:47:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:57.512 10:47:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.512 10:47:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.512 10:47:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.512 10:47:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:57.512 "name": "raid_bdev1", 00:21:57.512 "uuid": "145de930-6da9-4037-adc9-96d33c45746d", 00:21:57.512 "strip_size_kb": 0, 00:21:57.512 "state": "online", 00:21:57.512 "raid_level": "raid1", 00:21:57.512 "superblock": true, 00:21:57.512 "num_base_bdevs": 4, 00:21:57.512 "num_base_bdevs_discovered": 3, 00:21:57.512 "num_base_bdevs_operational": 3, 00:21:57.512 "base_bdevs_list": [ 00:21:57.512 { 00:21:57.512 "name": "spare", 00:21:57.512 "uuid": "efc04135-954a-587d-8051-e1b3d24cd3f4", 00:21:57.512 "is_configured": true, 00:21:57.512 "data_offset": 2048, 00:21:57.512 "data_size": 63488 00:21:57.512 }, 00:21:57.512 { 00:21:57.512 "name": null, 00:21:57.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.512 "is_configured": false, 00:21:57.512 "data_offset": 2048, 00:21:57.512 "data_size": 63488 00:21:57.512 }, 00:21:57.512 { 00:21:57.512 "name": "BaseBdev3", 00:21:57.512 "uuid": "eb099288-3a53-5f58-abc3-54606d57d876", 00:21:57.512 "is_configured": true, 00:21:57.512 "data_offset": 2048, 00:21:57.512 "data_size": 63488 00:21:57.512 }, 00:21:57.512 { 00:21:57.512 "name": "BaseBdev4", 00:21:57.512 "uuid": "5cbc130b-a9e3-5ef2-9f71-eb093ae43679", 00:21:57.512 "is_configured": true, 00:21:57.512 "data_offset": 2048, 00:21:57.512 "data_size": 63488 00:21:57.512 } 00:21:57.512 ] 00:21:57.512 }' 00:21:57.512 10:47:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:57.512 10:47:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:57.512 10:47:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:57.771 10:47:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:57.771 10:47:24 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.771 10:47:24 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:58.029 10:47:24 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.029 10:47:24 -- bdev/bdev_raid.sh@709 -- # killprocess 136118 00:21:58.029 10:47:24 -- common/autotest_common.sh@926 -- # '[' -z 136118 ']' 00:21:58.029 10:47:24 -- 
common/autotest_common.sh@930 -- # kill -0 136118 00:21:58.029 10:47:24 -- common/autotest_common.sh@931 -- # uname 00:21:58.029 10:47:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:58.029 10:47:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136118 00:21:58.029 killing process with pid 136118 00:21:58.029 Received shutdown signal, test time was about 60.000000 seconds 00:21:58.029 00:21:58.029 Latency(us) 00:21:58.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.029 =================================================================================================================== 00:21:58.029 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.029 10:47:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:58.029 10:47:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:58.029 10:47:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136118' 00:21:58.029 10:47:24 -- common/autotest_common.sh@945 -- # kill 136118 00:21:58.029 10:47:24 -- common/autotest_common.sh@950 -- # wait 136118 00:21:58.029 [2024-07-24 10:47:24.530625] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:58.029 [2024-07-24 10:47:24.530753] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:58.029 [2024-07-24 10:47:24.530881] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:58.029 [2024-07-24 10:47:24.530904] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:21:58.029 [2024-07-24 10:47:24.606972] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.286 10:47:24 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:58.286 00:21:58.286 real 0m28.604s 00:21:58.286 user 0m42.431s 00:21:58.286 sys 0m4.464s 00:21:58.286 10:47:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.286 10:47:24 -- common/autotest_common.sh@10 -- # set +x 00:21:58.286 ************************************ 00:21:58.286 END TEST raid_rebuild_test_sb 00:21:58.286 ************************************ 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:21:58.545 10:47:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:58.545 10:47:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:58.545 10:47:25 -- common/autotest_common.sh@10 -- # set +x 00:21:58.545 ************************************ 00:21:58.545 START TEST raid_rebuild_test_io 00:21:58.545 ************************************ 00:21:58.545 10:47:25 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:58.545 10:47:25 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@544 -- # raid_pid=136792 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136792 /var/tmp/spdk-raid.sock 00:21:58.545 10:47:25 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:58.545 10:47:25 -- common/autotest_common.sh@819 -- # '[' -z 136792 ']' 00:21:58.545 10:47:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:58.545 10:47:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:58.545 10:47:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:58.545 10:47:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.545 10:47:25 -- common/autotest_common.sh@10 -- # set +x 00:21:58.545 [2024-07-24 10:47:25.087989] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:21:58.545 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:58.545 Zero copy mechanism will not be used. 
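A note to make the trace easier to follow: from this point the harness drives every raid operation through bdevperf's RPC socket. Below is a rough sketch reconstructed only from the commands visible in the trace; the repo path, pid 136792 and flag values are simply what this particular run used, not fixed constants.

  rpc_server=/var/tmp/spdk-raid.sock
  # background I/O: 60 s of randrw, 50% read mix, 3 MiB I/Os, queue depth 2, flags as traced above
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r "$rpc_server" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!                                   # resolved to 136792 in this run
  waitforlisten "$raid_pid" "$rpc_server"       # autotest helper seen in the trace; waits for the socket to accept RPCs
  # every later step is an RPC against that socket: scripts/rpc.py -s "$rpc_server" <method> ...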
00:21:58.545 [2024-07-24 10:47:25.088253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136792 ] 00:21:58.803 [2024-07-24 10:47:25.236531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.803 [2024-07-24 10:47:25.372814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.803 [2024-07-24 10:47:25.452545] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.369 10:47:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.369 10:47:26 -- common/autotest_common.sh@852 -- # return 0 00:21:59.369 10:47:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:59.369 10:47:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:59.369 10:47:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:59.627 BaseBdev1 00:21:59.627 10:47:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:59.627 10:47:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:59.627 10:47:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:59.885 BaseBdev2 00:21:59.885 10:47:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:59.885 10:47:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:59.885 10:47:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:00.450 BaseBdev3 00:22:00.450 10:47:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:00.450 10:47:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:00.450 10:47:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:00.708 BaseBdev4 00:22:00.708 10:47:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:00.967 spare_malloc 00:22:00.967 10:47:27 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:01.225 spare_delay 00:22:01.225 10:47:27 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:01.483 [2024-07-24 10:47:27.985230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.483 [2024-07-24 10:47:27.985730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.483 [2024-07-24 10:47:27.985917] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:01.483 [2024-07-24 10:47:27.986083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.483 [2024-07-24 10:47:27.989420] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.483 [2024-07-24 10:47:27.989658] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.483 spare 00:22:01.483 10:47:28 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:01.740 [2024-07-24 10:47:28.278301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.740 [2024-07-24 10:47:28.280964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.740 [2024-07-24 10:47:28.281163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:01.740 [2024-07-24 10:47:28.281323] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:01.740 [2024-07-24 10:47:28.281532] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:22:01.740 [2024-07-24 10:47:28.281647] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:01.740 [2024-07-24 10:47:28.281968] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:22:01.740 [2024-07-24 10:47:28.282555] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:22:01.740 [2024-07-24 10:47:28.282671] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:22:01.740 [2024-07-24 10:47:28.283055] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.740 10:47:28 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:01.740 10:47:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.741 10:47:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.998 10:47:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.998 "name": "raid_bdev1", 00:22:01.998 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:01.998 "strip_size_kb": 0, 00:22:01.998 "state": "online", 00:22:01.998 "raid_level": "raid1", 00:22:01.998 "superblock": false, 00:22:01.998 "num_base_bdevs": 4, 00:22:01.998 "num_base_bdevs_discovered": 4, 00:22:01.998 "num_base_bdevs_operational": 4, 00:22:01.998 "base_bdevs_list": [ 00:22:01.998 { 00:22:01.998 "name": "BaseBdev1", 00:22:01.998 "uuid": "25ffc5f6-c72a-4a99-bd91-13208b174647", 00:22:01.998 "is_configured": true, 00:22:01.998 "data_offset": 0, 00:22:01.998 "data_size": 65536 00:22:01.998 }, 00:22:01.998 { 00:22:01.998 "name": "BaseBdev2", 00:22:01.998 "uuid": "1576a402-ff09-4bc3-ac70-68f821d300b4", 00:22:01.998 "is_configured": true, 00:22:01.998 "data_offset": 0, 00:22:01.998 "data_size": 65536 00:22:01.998 }, 00:22:01.998 { 00:22:01.998 "name": "BaseBdev3", 00:22:01.998 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:01.998 "is_configured": true, 00:22:01.998 "data_offset": 0, 00:22:01.998 "data_size": 65536 00:22:01.998 }, 
00:22:01.998 { 00:22:01.998 "name": "BaseBdev4", 00:22:01.998 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:01.998 "is_configured": true, 00:22:01.998 "data_offset": 0, 00:22:01.998 "data_size": 65536 00:22:01.998 } 00:22:01.998 ] 00:22:01.998 }' 00:22:01.998 10:47:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.998 10:47:28 -- common/autotest_common.sh@10 -- # set +x 00:22:02.566 10:47:29 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:02.566 10:47:29 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:02.824 [2024-07-24 10:47:29.491754] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.082 10:47:29 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:03.082 10:47:29 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:03.082 10:47:29 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.082 10:47:29 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:03.082 10:47:29 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:03.082 10:47:29 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:03.082 10:47:29 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:03.340 [2024-07-24 10:47:29.871497] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:22:03.340 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:03.340 Zero copy mechanism will not be used. 00:22:03.340 Running I/O for 60 seconds... 
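The verify_raid_bdev_state calls seen throughout this trace reduce to fetching the raid bdev JSON over RPC and asserting on a few fields with jq. A minimal sketch, using the RPC and jq filter exactly as they appear above; the expected values are whatever the call passes in (for example 4 discovered base bdevs right after creation, 3 once BaseBdev1 is removed below).

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state'      <<< "$info") == online ]]
  [[ $(jq -r '.raid_level' <<< "$info") == raid1  ]]
  # plus checks of num_base_bdevs_discovered / num_base_bdevs_operational against the expected count
  # (the JSON dumps in the trace show those fields dropping from 4 to 3 after a base bdev is removed)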
00:22:03.340 [2024-07-24 10:47:30.023986] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:03.598 [2024-07-24 10:47:30.040508] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.598 10:47:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.855 10:47:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.855 "name": "raid_bdev1", 00:22:03.855 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:03.855 "strip_size_kb": 0, 00:22:03.855 "state": "online", 00:22:03.855 "raid_level": "raid1", 00:22:03.855 "superblock": false, 00:22:03.855 "num_base_bdevs": 4, 00:22:03.855 "num_base_bdevs_discovered": 3, 00:22:03.855 "num_base_bdevs_operational": 3, 00:22:03.855 "base_bdevs_list": [ 00:22:03.855 { 00:22:03.855 "name": null, 00:22:03.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.855 "is_configured": false, 00:22:03.855 "data_offset": 0, 00:22:03.855 "data_size": 65536 00:22:03.855 }, 00:22:03.855 { 00:22:03.855 "name": "BaseBdev2", 00:22:03.855 "uuid": "1576a402-ff09-4bc3-ac70-68f821d300b4", 00:22:03.855 "is_configured": true, 00:22:03.855 "data_offset": 0, 00:22:03.855 "data_size": 65536 00:22:03.855 }, 00:22:03.855 { 00:22:03.856 "name": "BaseBdev3", 00:22:03.856 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:03.856 "is_configured": true, 00:22:03.856 "data_offset": 0, 00:22:03.856 "data_size": 65536 00:22:03.856 }, 00:22:03.856 { 00:22:03.856 "name": "BaseBdev4", 00:22:03.856 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:03.856 "is_configured": true, 00:22:03.856 "data_offset": 0, 00:22:03.856 "data_size": 65536 00:22:03.856 } 00:22:03.856 ] 00:22:03.856 }' 00:22:03.856 10:47:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.856 10:47:30 -- common/autotest_common.sh@10 -- # set +x 00:22:04.787 10:47:31 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:04.787 [2024-07-24 10:47:31.332400] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:04.787 [2024-07-24 10:47:31.332757] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:04.787 10:47:31 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:04.787 [2024-07-24 10:47:31.405589] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:22:04.787 [2024-07-24 10:47:31.408604] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:05.045 [2024-07-24 
10:47:31.531266] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:05.045 [2024-07-24 10:47:31.532327] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:05.303 [2024-07-24 10:47:31.745784] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:05.303 [2024-07-24 10:47:31.746954] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:05.560 [2024-07-24 10:47:32.094620] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:05.818 [2024-07-24 10:47:32.309721] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:05.818 [2024-07-24 10:47:32.310580] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:05.818 10:47:32 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:05.818 10:47:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:05.818 10:47:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:05.818 10:47:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:05.818 10:47:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:05.818 10:47:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.818 10:47:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.076 [2024-07-24 10:47:32.655998] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:06.076 10:47:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:06.076 "name": "raid_bdev1", 00:22:06.076 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:06.076 "strip_size_kb": 0, 00:22:06.076 "state": "online", 00:22:06.076 "raid_level": "raid1", 00:22:06.076 "superblock": false, 00:22:06.076 "num_base_bdevs": 4, 00:22:06.076 "num_base_bdevs_discovered": 4, 00:22:06.076 "num_base_bdevs_operational": 4, 00:22:06.076 "process": { 00:22:06.076 "type": "rebuild", 00:22:06.076 "target": "spare", 00:22:06.076 "progress": { 00:22:06.076 "blocks": 16384, 00:22:06.076 "percent": 25 00:22:06.076 } 00:22:06.076 }, 00:22:06.076 "base_bdevs_list": [ 00:22:06.076 { 00:22:06.076 "name": "spare", 00:22:06.076 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:06.076 "is_configured": true, 00:22:06.076 "data_offset": 0, 00:22:06.076 "data_size": 65536 00:22:06.076 }, 00:22:06.076 { 00:22:06.076 "name": "BaseBdev2", 00:22:06.076 "uuid": "1576a402-ff09-4bc3-ac70-68f821d300b4", 00:22:06.076 "is_configured": true, 00:22:06.076 "data_offset": 0, 00:22:06.076 "data_size": 65536 00:22:06.076 }, 00:22:06.076 { 00:22:06.076 "name": "BaseBdev3", 00:22:06.076 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:06.076 "is_configured": true, 00:22:06.076 "data_offset": 0, 00:22:06.076 "data_size": 65536 00:22:06.076 }, 00:22:06.076 { 00:22:06.076 "name": "BaseBdev4", 00:22:06.076 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:06.076 "is_configured": true, 00:22:06.076 "data_offset": 0, 00:22:06.076 "data_size": 65536 00:22:06.076 } 00:22:06.076 ] 00:22:06.076 }' 00:22:06.076 10:47:32 -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.type // "none"' 00:22:06.076 10:47:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:06.076 10:47:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:06.334 10:47:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:06.334 10:47:32 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:06.334 [2024-07-24 10:47:33.007437] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:06.614 [2024-07-24 10:47:33.156622] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:06.614 [2024-07-24 10:47:33.161818] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:06.614 [2024-07-24 10:47:33.179479] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002390 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.614 10:47:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.871 10:47:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.871 "name": "raid_bdev1", 00:22:06.871 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:06.871 "strip_size_kb": 0, 00:22:06.871 "state": "online", 00:22:06.871 "raid_level": "raid1", 00:22:06.871 "superblock": false, 00:22:06.871 "num_base_bdevs": 4, 00:22:06.871 "num_base_bdevs_discovered": 3, 00:22:06.871 "num_base_bdevs_operational": 3, 00:22:06.871 "base_bdevs_list": [ 00:22:06.871 { 00:22:06.871 "name": null, 00:22:06.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.871 "is_configured": false, 00:22:06.871 "data_offset": 0, 00:22:06.871 "data_size": 65536 00:22:06.871 }, 00:22:06.871 { 00:22:06.871 "name": "BaseBdev2", 00:22:06.871 "uuid": "1576a402-ff09-4bc3-ac70-68f821d300b4", 00:22:06.871 "is_configured": true, 00:22:06.871 "data_offset": 0, 00:22:06.871 "data_size": 65536 00:22:06.871 }, 00:22:06.871 { 00:22:06.871 "name": "BaseBdev3", 00:22:06.871 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:06.871 "is_configured": true, 00:22:06.871 "data_offset": 0, 00:22:06.871 "data_size": 65536 00:22:06.871 }, 00:22:06.871 { 00:22:06.871 "name": "BaseBdev4", 00:22:06.871 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:06.871 "is_configured": true, 00:22:06.871 "data_offset": 0, 00:22:06.871 "data_size": 65536 00:22:06.871 } 00:22:06.871 ] 00:22:06.871 }' 00:22:06.871 10:47:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.871 10:47:33 -- common/autotest_common.sh@10 -- # set +x 00:22:07.804 10:47:34 -- bdev/bdev_raid.sh@610 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:22:07.804 10:47:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:07.804 10:47:34 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:07.804 10:47:34 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:07.804 10:47:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:07.804 10:47:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.804 10:47:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.063 10:47:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:08.063 "name": "raid_bdev1", 00:22:08.063 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:08.063 "strip_size_kb": 0, 00:22:08.063 "state": "online", 00:22:08.063 "raid_level": "raid1", 00:22:08.063 "superblock": false, 00:22:08.063 "num_base_bdevs": 4, 00:22:08.063 "num_base_bdevs_discovered": 3, 00:22:08.063 "num_base_bdevs_operational": 3, 00:22:08.063 "base_bdevs_list": [ 00:22:08.063 { 00:22:08.063 "name": null, 00:22:08.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.063 "is_configured": false, 00:22:08.063 "data_offset": 0, 00:22:08.063 "data_size": 65536 00:22:08.063 }, 00:22:08.063 { 00:22:08.063 "name": "BaseBdev2", 00:22:08.063 "uuid": "1576a402-ff09-4bc3-ac70-68f821d300b4", 00:22:08.063 "is_configured": true, 00:22:08.063 "data_offset": 0, 00:22:08.063 "data_size": 65536 00:22:08.063 }, 00:22:08.063 { 00:22:08.063 "name": "BaseBdev3", 00:22:08.063 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:08.063 "is_configured": true, 00:22:08.063 "data_offset": 0, 00:22:08.063 "data_size": 65536 00:22:08.063 }, 00:22:08.063 { 00:22:08.063 "name": "BaseBdev4", 00:22:08.063 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:08.063 "is_configured": true, 00:22:08.063 "data_offset": 0, 00:22:08.063 "data_size": 65536 00:22:08.063 } 00:22:08.063 ] 00:22:08.063 }' 00:22:08.063 10:47:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:08.063 10:47:34 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:08.063 10:47:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:08.063 10:47:34 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:08.063 10:47:34 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:08.320 [2024-07-24 10:47:34.882613] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:08.320 [2024-07-24 10:47:34.882991] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:08.320 [2024-07-24 10:47:34.932714] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:08.320 [2024-07-24 10:47:34.935598] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:08.320 10:47:34 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:08.578 [2024-07-24 10:47:35.057616] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:08.578 [2024-07-24 10:47:35.058569] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:08.836 [2024-07-24 10:47:35.289381] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:08.836 [2024-07-24 10:47:35.290208] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:09.094 [2024-07-24 10:47:35.668125] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:09.352 [2024-07-24 10:47:35.898418] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:09.352 [2024-07-24 10:47:35.899658] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:09.352 10:47:35 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.352 10:47:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:09.352 10:47:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:09.352 10:47:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:09.352 10:47:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:09.352 10:47:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.352 10:47:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.611 "name": "raid_bdev1", 00:22:09.611 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:09.611 "strip_size_kb": 0, 00:22:09.611 "state": "online", 00:22:09.611 "raid_level": "raid1", 00:22:09.611 "superblock": false, 00:22:09.611 "num_base_bdevs": 4, 00:22:09.611 "num_base_bdevs_discovered": 4, 00:22:09.611 "num_base_bdevs_operational": 4, 00:22:09.611 "process": { 00:22:09.611 "type": "rebuild", 00:22:09.611 "target": "spare", 00:22:09.611 "progress": { 00:22:09.611 "blocks": 12288, 00:22:09.611 "percent": 18 00:22:09.611 } 00:22:09.611 }, 00:22:09.611 "base_bdevs_list": [ 00:22:09.611 { 00:22:09.611 "name": "spare", 00:22:09.611 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:09.611 "is_configured": true, 00:22:09.611 "data_offset": 0, 00:22:09.611 "data_size": 65536 00:22:09.611 }, 00:22:09.611 { 00:22:09.611 "name": "BaseBdev2", 00:22:09.611 "uuid": "1576a402-ff09-4bc3-ac70-68f821d300b4", 00:22:09.611 "is_configured": true, 00:22:09.611 "data_offset": 0, 00:22:09.611 "data_size": 65536 00:22:09.611 }, 00:22:09.611 { 00:22:09.611 "name": "BaseBdev3", 00:22:09.611 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:09.611 "is_configured": true, 00:22:09.611 "data_offset": 0, 00:22:09.611 "data_size": 65536 00:22:09.611 }, 00:22:09.611 { 00:22:09.611 "name": "BaseBdev4", 00:22:09.611 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:09.611 "is_configured": true, 00:22:09.611 "data_offset": 0, 00:22:09.611 "data_size": 65536 00:22:09.611 } 00:22:09.611 ] 00:22:09.611 }' 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:09.611 10:47:36 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:09.870 [2024-07-24 10:47:36.418620] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:09.870 [2024-07-24 10:47:36.506184] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:10.138 [2024-07-24 10:47:36.742045] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002390 00:22:10.138 [2024-07-24 10:47:36.742495] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002600 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.138 10:47:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.414 [2024-07-24 10:47:36.994854] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:10.414 10:47:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:10.414 "name": "raid_bdev1", 00:22:10.414 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:10.414 "strip_size_kb": 0, 00:22:10.414 "state": "online", 00:22:10.414 "raid_level": "raid1", 00:22:10.414 "superblock": false, 00:22:10.414 "num_base_bdevs": 4, 00:22:10.414 "num_base_bdevs_discovered": 3, 00:22:10.414 "num_base_bdevs_operational": 3, 00:22:10.414 "process": { 00:22:10.414 "type": "rebuild", 00:22:10.414 "target": "spare", 00:22:10.414 "progress": { 00:22:10.414 "blocks": 20480, 00:22:10.414 "percent": 31 00:22:10.414 } 00:22:10.414 }, 00:22:10.414 "base_bdevs_list": [ 00:22:10.414 { 00:22:10.414 "name": "spare", 00:22:10.414 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:10.414 "is_configured": true, 00:22:10.414 "data_offset": 0, 00:22:10.414 "data_size": 65536 00:22:10.414 }, 00:22:10.414 { 00:22:10.414 "name": null, 00:22:10.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.414 "is_configured": false, 00:22:10.414 "data_offset": 0, 00:22:10.414 "data_size": 65536 00:22:10.414 }, 00:22:10.414 { 00:22:10.414 "name": "BaseBdev3", 00:22:10.414 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:10.414 "is_configured": true, 00:22:10.414 "data_offset": 0, 00:22:10.414 "data_size": 65536 00:22:10.414 }, 00:22:10.414 { 00:22:10.414 "name": "BaseBdev4", 00:22:10.414 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:10.414 "is_configured": true, 00:22:10.414 "data_offset": 0, 00:22:10.414 "data_size": 65536 00:22:10.414 } 00:22:10.414 ] 00:22:10.414 }' 00:22:10.414 10:47:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:10.414 10:47:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.414 10:47:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:10.414 10:47:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.414 
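The rebuild-progress checks that fill the rest of the trace are one polling loop: re-read the raid bdev over RPC about once a second and look at its "process" object until the rebuild finishes or a timeout expires. A sketch of that loop as it appears in the trace; 537 is just the timeout value this run computed, and the jq filters are the ones traced.

  timeout=537
  while (( SECONDS < timeout )); do
      info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
             jq -r '.[] | select(.name == "raid_bdev1")')
      [[ $(jq -r '.process.type   // "none"' <<< "$info") == rebuild ]] || break  # process gone: rebuild finished
      [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]]             # data is rebuilt onto "spare"
      # .process.progress.blocks / .percent give the numbers shown in the JSON dumps above
      sleep 1
  done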
10:47:37 -- bdev/bdev_raid.sh@657 -- # local timeout=537 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.673 10:47:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.931 10:47:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:10.931 "name": "raid_bdev1", 00:22:10.931 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:10.931 "strip_size_kb": 0, 00:22:10.931 "state": "online", 00:22:10.931 "raid_level": "raid1", 00:22:10.931 "superblock": false, 00:22:10.931 "num_base_bdevs": 4, 00:22:10.931 "num_base_bdevs_discovered": 3, 00:22:10.931 "num_base_bdevs_operational": 3, 00:22:10.931 "process": { 00:22:10.931 "type": "rebuild", 00:22:10.931 "target": "spare", 00:22:10.931 "progress": { 00:22:10.931 "blocks": 24576, 00:22:10.931 "percent": 37 00:22:10.931 } 00:22:10.931 }, 00:22:10.931 "base_bdevs_list": [ 00:22:10.931 { 00:22:10.931 "name": "spare", 00:22:10.931 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:10.931 "is_configured": true, 00:22:10.931 "data_offset": 0, 00:22:10.931 "data_size": 65536 00:22:10.931 }, 00:22:10.931 { 00:22:10.931 "name": null, 00:22:10.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.931 "is_configured": false, 00:22:10.931 "data_offset": 0, 00:22:10.931 "data_size": 65536 00:22:10.931 }, 00:22:10.931 { 00:22:10.931 "name": "BaseBdev3", 00:22:10.931 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:10.931 "is_configured": true, 00:22:10.931 "data_offset": 0, 00:22:10.931 "data_size": 65536 00:22:10.931 }, 00:22:10.931 { 00:22:10.931 "name": "BaseBdev4", 00:22:10.931 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:10.931 "is_configured": true, 00:22:10.931 "data_offset": 0, 00:22:10.931 "data_size": 65536 00:22:10.931 } 00:22:10.931 ] 00:22:10.931 }' 00:22:10.931 10:47:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:10.931 10:47:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.931 10:47:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:10.931 10:47:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.931 10:47:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:11.189 [2024-07-24 10:47:37.766118] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:11.189 [2024-07-24 10:47:37.767188] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:11.446 [2024-07-24 10:47:37.999783] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:11.705 [2024-07-24 10:47:38.336093] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.963 10:47:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.963 [2024-07-24 10:47:38.573802] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:11.963 [2024-07-24 10:47:38.574472] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:12.220 10:47:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:12.220 "name": "raid_bdev1", 00:22:12.220 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:12.220 "strip_size_kb": 0, 00:22:12.220 "state": "online", 00:22:12.220 "raid_level": "raid1", 00:22:12.220 "superblock": false, 00:22:12.220 "num_base_bdevs": 4, 00:22:12.220 "num_base_bdevs_discovered": 3, 00:22:12.220 "num_base_bdevs_operational": 3, 00:22:12.220 "process": { 00:22:12.220 "type": "rebuild", 00:22:12.220 "target": "spare", 00:22:12.220 "progress": { 00:22:12.220 "blocks": 40960, 00:22:12.220 "percent": 62 00:22:12.220 } 00:22:12.220 }, 00:22:12.220 "base_bdevs_list": [ 00:22:12.220 { 00:22:12.220 "name": "spare", 00:22:12.220 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:12.220 "is_configured": true, 00:22:12.220 "data_offset": 0, 00:22:12.220 "data_size": 65536 00:22:12.220 }, 00:22:12.220 { 00:22:12.220 "name": null, 00:22:12.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.220 "is_configured": false, 00:22:12.220 "data_offset": 0, 00:22:12.220 "data_size": 65536 00:22:12.220 }, 00:22:12.220 { 00:22:12.220 "name": "BaseBdev3", 00:22:12.220 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:12.220 "is_configured": true, 00:22:12.220 "data_offset": 0, 00:22:12.220 "data_size": 65536 00:22:12.220 }, 00:22:12.220 { 00:22:12.220 "name": "BaseBdev4", 00:22:12.220 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:12.220 "is_configured": true, 00:22:12.220 "data_offset": 0, 00:22:12.220 "data_size": 65536 00:22:12.220 } 00:22:12.220 ] 00:22:12.220 }' 00:22:12.220 10:47:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.220 10:47:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.220 10:47:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:12.220 10:47:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.220 10:47:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:12.478 [2024-07-24 10:47:39.030585] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 
00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.411 10:47:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.411 [2024-07-24 10:47:40.067998] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:13.671 10:47:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.671 "name": "raid_bdev1", 00:22:13.671 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:13.671 "strip_size_kb": 0, 00:22:13.671 "state": "online", 00:22:13.671 "raid_level": "raid1", 00:22:13.671 "superblock": false, 00:22:13.671 "num_base_bdevs": 4, 00:22:13.671 "num_base_bdevs_discovered": 3, 00:22:13.672 "num_base_bdevs_operational": 3, 00:22:13.672 "process": { 00:22:13.672 "type": "rebuild", 00:22:13.672 "target": "spare", 00:22:13.672 "progress": { 00:22:13.672 "blocks": 65536, 00:22:13.672 "percent": 100 00:22:13.672 } 00:22:13.672 }, 00:22:13.672 "base_bdevs_list": [ 00:22:13.672 { 00:22:13.672 "name": "spare", 00:22:13.672 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:13.672 "is_configured": true, 00:22:13.672 "data_offset": 0, 00:22:13.672 "data_size": 65536 00:22:13.672 }, 00:22:13.672 { 00:22:13.672 "name": null, 00:22:13.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.672 "is_configured": false, 00:22:13.672 "data_offset": 0, 00:22:13.672 "data_size": 65536 00:22:13.672 }, 00:22:13.672 { 00:22:13.672 "name": "BaseBdev3", 00:22:13.672 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:13.672 "is_configured": true, 00:22:13.672 "data_offset": 0, 00:22:13.672 "data_size": 65536 00:22:13.672 }, 00:22:13.672 { 00:22:13.672 "name": "BaseBdev4", 00:22:13.672 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:13.672 "is_configured": true, 00:22:13.672 "data_offset": 0, 00:22:13.672 "data_size": 65536 00:22:13.672 } 00:22:13.672 ] 00:22:13.672 }' 00:22:13.672 10:47:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.672 [2024-07-24 10:47:40.168102] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:13.672 [2024-07-24 10:47:40.171106] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.672 10:47:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:13.672 10:47:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.672 10:47:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.672 10:47:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.606 10:47:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.864 10:47:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.864 "name": "raid_bdev1", 00:22:14.864 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:14.864 "strip_size_kb": 0, 00:22:14.864 "state": 
"online", 00:22:14.864 "raid_level": "raid1", 00:22:14.864 "superblock": false, 00:22:14.864 "num_base_bdevs": 4, 00:22:14.864 "num_base_bdevs_discovered": 3, 00:22:14.864 "num_base_bdevs_operational": 3, 00:22:14.864 "base_bdevs_list": [ 00:22:14.864 { 00:22:14.864 "name": "spare", 00:22:14.864 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:14.864 "is_configured": true, 00:22:14.864 "data_offset": 0, 00:22:14.864 "data_size": 65536 00:22:14.864 }, 00:22:14.864 { 00:22:14.864 "name": null, 00:22:14.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.864 "is_configured": false, 00:22:14.864 "data_offset": 0, 00:22:14.864 "data_size": 65536 00:22:14.864 }, 00:22:14.864 { 00:22:14.864 "name": "BaseBdev3", 00:22:14.864 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:14.864 "is_configured": true, 00:22:14.864 "data_offset": 0, 00:22:14.864 "data_size": 65536 00:22:14.864 }, 00:22:14.864 { 00:22:14.864 "name": "BaseBdev4", 00:22:14.864 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:14.864 "is_configured": true, 00:22:14.864 "data_offset": 0, 00:22:14.864 "data_size": 65536 00:22:14.864 } 00:22:14.864 ] 00:22:14.864 }' 00:22:14.864 10:47:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@660 -- # break 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.123 10:47:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.380 10:47:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.380 "name": "raid_bdev1", 00:22:15.380 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:15.380 "strip_size_kb": 0, 00:22:15.380 "state": "online", 00:22:15.380 "raid_level": "raid1", 00:22:15.380 "superblock": false, 00:22:15.380 "num_base_bdevs": 4, 00:22:15.380 "num_base_bdevs_discovered": 3, 00:22:15.380 "num_base_bdevs_operational": 3, 00:22:15.380 "base_bdevs_list": [ 00:22:15.380 { 00:22:15.380 "name": "spare", 00:22:15.381 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:15.381 "is_configured": true, 00:22:15.381 "data_offset": 0, 00:22:15.381 "data_size": 65536 00:22:15.381 }, 00:22:15.381 { 00:22:15.381 "name": null, 00:22:15.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.381 "is_configured": false, 00:22:15.381 "data_offset": 0, 00:22:15.381 "data_size": 65536 00:22:15.381 }, 00:22:15.381 { 00:22:15.381 "name": "BaseBdev3", 00:22:15.381 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:15.381 "is_configured": true, 00:22:15.381 "data_offset": 0, 00:22:15.381 "data_size": 65536 00:22:15.381 }, 00:22:15.381 { 00:22:15.381 "name": "BaseBdev4", 00:22:15.381 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:15.381 "is_configured": true, 00:22:15.381 "data_offset": 0, 00:22:15.381 "data_size": 65536 00:22:15.381 } 00:22:15.381 ] 
00:22:15.381 }' 00:22:15.381 10:47:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.381 10:47:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:15.381 10:47:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.381 10:47:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.639 10:47:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:15.639 "name": "raid_bdev1", 00:22:15.639 "uuid": "2c9a3260-66cb-4b50-907b-3915c681c6c9", 00:22:15.639 "strip_size_kb": 0, 00:22:15.639 "state": "online", 00:22:15.639 "raid_level": "raid1", 00:22:15.639 "superblock": false, 00:22:15.639 "num_base_bdevs": 4, 00:22:15.639 "num_base_bdevs_discovered": 3, 00:22:15.639 "num_base_bdevs_operational": 3, 00:22:15.639 "base_bdevs_list": [ 00:22:15.639 { 00:22:15.639 "name": "spare", 00:22:15.639 "uuid": "f8131692-bf06-559a-9384-c4d50324d235", 00:22:15.639 "is_configured": true, 00:22:15.639 "data_offset": 0, 00:22:15.639 "data_size": 65536 00:22:15.639 }, 00:22:15.639 { 00:22:15.639 "name": null, 00:22:15.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.639 "is_configured": false, 00:22:15.639 "data_offset": 0, 00:22:15.639 "data_size": 65536 00:22:15.639 }, 00:22:15.639 { 00:22:15.639 "name": "BaseBdev3", 00:22:15.639 "uuid": "55112cf2-c618-4d27-950a-ce18ecd35f1b", 00:22:15.639 "is_configured": true, 00:22:15.639 "data_offset": 0, 00:22:15.639 "data_size": 65536 00:22:15.639 }, 00:22:15.639 { 00:22:15.639 "name": "BaseBdev4", 00:22:15.639 "uuid": "f919e399-ba62-4243-a84e-81c0958d6d87", 00:22:15.639 "is_configured": true, 00:22:15.639 "data_offset": 0, 00:22:15.639 "data_size": 65536 00:22:15.639 } 00:22:15.639 ] 00:22:15.639 }' 00:22:15.639 10:47:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:15.639 10:47:42 -- common/autotest_common.sh@10 -- # set +x 00:22:16.572 10:47:43 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:16.827 [2024-07-24 10:47:43.257522] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.828 [2024-07-24 10:47:43.257650] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.828 00:22:16.828 Latency(us) 00:22:16.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.828 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:16.828 raid_bdev1 : 13.42 
82.10 246.31 0.00 0.00 17222.80 297.89 123922.62 00:22:16.828 =================================================================================================================== 00:22:16.828 Total : 82.10 246.31 0.00 0.00 17222.80 297.89 123922.62 00:22:16.828 [2024-07-24 10:47:43.302989] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.828 [2024-07-24 10:47:43.303114] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.828 [2024-07-24 10:47:43.303247] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.828 [2024-07-24 10:47:43.303264] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:22:16.828 0 00:22:16.828 10:47:43 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.828 10:47:43 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:17.084 10:47:43 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:17.084 10:47:43 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:17.084 10:47:43 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@12 -- # local i 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.084 10:47:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:17.341 /dev/nbd0 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:17.341 10:47:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:17.341 10:47:43 -- common/autotest_common.sh@857 -- # local i 00:22:17.341 10:47:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:17.341 10:47:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:17.341 10:47:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:17.341 10:47:43 -- common/autotest_common.sh@861 -- # break 00:22:17.341 10:47:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:17.341 10:47:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:17.341 10:47:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.341 1+0 records in 00:22:17.341 1+0 records out 00:22:17.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642238 s, 6.4 MB/s 00:22:17.341 10:47:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.341 10:47:43 -- common/autotest_common.sh@874 -- # size=4096 00:22:17.341 10:47:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.341 10:47:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:17.341 10:47:43 -- common/autotest_common.sh@877 -- # return 0 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.341 10:47:43 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.341 10:47:43 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:17.341 10:47:43 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:17.341 10:47:43 -- bdev/bdev_raid.sh@678 -- # continue 00:22:17.341 10:47:43 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:17.341 10:47:43 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:17.341 10:47:43 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@12 -- # local i 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:17.341 10:47:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.342 10:47:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:17.598 /dev/nbd1 00:22:17.909 10:47:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:17.909 10:47:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:17.909 10:47:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:17.909 10:47:44 -- common/autotest_common.sh@857 -- # local i 00:22:17.909 10:47:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:17.909 10:47:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:17.909 10:47:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:17.909 10:47:44 -- common/autotest_common.sh@861 -- # break 00:22:17.909 10:47:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:17.909 10:47:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:17.909 10:47:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:17.909 1+0 records in 00:22:17.909 1+0 records out 00:22:17.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494958 s, 8.3 MB/s 00:22:17.910 10:47:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.910 10:47:44 -- common/autotest_common.sh@874 -- # size=4096 00:22:17.910 10:47:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:17.910 10:47:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:17.910 10:47:44 -- common/autotest_common.sh@877 -- # return 0 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:17.910 10:47:44 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:17.910 10:47:44 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@51 -- # local i 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.910 10:47:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd1 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@41 -- # break 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.174 10:47:44 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:18.174 10:47:44 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:18.174 10:47:44 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@12 -- # local i 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:18.174 10:47:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:18.432 /dev/nbd1 00:22:18.432 10:47:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:18.432 10:47:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:18.432 10:47:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:18.432 10:47:45 -- common/autotest_common.sh@857 -- # local i 00:22:18.432 10:47:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:18.432 10:47:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:18.432 10:47:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:18.432 10:47:45 -- common/autotest_common.sh@861 -- # break 00:22:18.432 10:47:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:18.432 10:47:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:18.432 10:47:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:18.689 1+0 records in 00:22:18.689 1+0 records out 00:22:18.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401393 s, 10.2 MB/s 00:22:18.689 10:47:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.689 10:47:45 -- common/autotest_common.sh@874 -- # size=4096 00:22:18.689 10:47:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.689 10:47:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:18.689 10:47:45 -- common/autotest_common.sh@877 -- # return 0 00:22:18.689 10:47:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:18.689 10:47:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:18.689 10:47:45 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:18.689 10:47:45 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:18.689 10:47:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:18.689 10:47:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:18.689 
10:47:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:18.689 10:47:45 -- bdev/nbd_common.sh@51 -- # local i 00:22:18.689 10:47:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.689 10:47:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@41 -- # break 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@45 -- # return 0 00:22:18.947 10:47:45 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@51 -- # local i 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:18.947 10:47:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:19.204 10:47:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:19.205 10:47:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:19.205 10:47:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:19.205 10:47:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.205 10:47:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.205 10:47:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:19.205 10:47:45 -- bdev/nbd_common.sh@41 -- # break 00:22:19.205 10:47:45 -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.205 10:47:45 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:19.205 10:47:45 -- bdev/bdev_raid.sh@709 -- # killprocess 136792 00:22:19.205 10:47:45 -- common/autotest_common.sh@926 -- # '[' -z 136792 ']' 00:22:19.205 10:47:45 -- common/autotest_common.sh@930 -- # kill -0 136792 00:22:19.205 10:47:45 -- common/autotest_common.sh@931 -- # uname 00:22:19.205 10:47:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.205 10:47:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136792 00:22:19.205 10:47:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:19.205 10:47:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:19.205 10:47:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136792' 00:22:19.205 killing process with pid 136792 00:22:19.205 10:47:45 -- common/autotest_common.sh@945 -- # kill 136792 00:22:19.205 Received shutdown signal, test time was about 15.829311 seconds 00:22:19.205 00:22:19.205 Latency(us) 00:22:19.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.205 =================================================================================================================== 00:22:19.205 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.205 [2024-07-24 10:47:45.704412] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:19.205 10:47:45 -- common/autotest_common.sh@950 -- # wait 136792 
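The teardown above closes out the NBD-based data check in raid_rebuild_test_io: once the rebuild is done, the spare and each surviving base bdev are exported as /dev/nbd devices, compared byte-for-byte with cmp, and the exports are stopped before the bdevperf process is killed. A minimal standalone sketch of that check follows; the RPC socket path and bdev names are the ones used in this run and are illustrative only — this is not the bdev_raid.sh test script itself, and it assumes an SPDK target is already listening on that socket with these bdevs configured.

  #!/usr/bin/env bash
  # Sketch of the NBD compare step traced above (assumes an SPDK target is
  # already listening on /var/tmp/spdk-raid.sock with these bdevs configured).
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Export the rebuilt spare and one surviving base bdev over NBD.
  $rpc nbd_start_disk spare     /dev/nbd0
  $rpc nbd_start_disk BaseBdev3 /dev/nbd1

  # Byte-for-byte comparison from offset 0; raid1 members must be identical.
  cmp -i 0 /dev/nbd0 /dev/nbd1

  # Tear the NBD exports down again.
  $rpc nbd_stop_disk /dev/nbd1
  $rpc nbd_stop_disk /dev/nbd0
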
00:22:19.205 [2024-07-24 10:47:45.768954] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:19.770 00:22:19.770 real 0m21.156s 00:22:19.770 user 0m33.327s 00:22:19.770 sys 0m2.789s 00:22:19.770 10:47:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:19.770 10:47:46 -- common/autotest_common.sh@10 -- # set +x 00:22:19.770 ************************************ 00:22:19.770 END TEST raid_rebuild_test_io 00:22:19.770 ************************************ 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:19.770 10:47:46 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:19.770 10:47:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:19.770 10:47:46 -- common/autotest_common.sh@10 -- # set +x 00:22:19.770 ************************************ 00:22:19.770 START TEST raid_rebuild_test_sb_io 00:22:19.770 ************************************ 00:22:19.770 10:47:46 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@544 -- # raid_pid=137336 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137336 /var/tmp/spdk-raid.sock 00:22:19.770 10:47:46 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw 
-M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.770 10:47:46 -- common/autotest_common.sh@819 -- # '[' -z 137336 ']' 00:22:19.770 10:47:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:19.770 10:47:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.770 10:47:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:19.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:19.770 10:47:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.770 10:47:46 -- common/autotest_common.sh@10 -- # set +x 00:22:19.770 [2024-07-24 10:47:46.311388] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:22:19.771 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.771 Zero copy mechanism will not be used. 00:22:19.771 [2024-07-24 10:47:46.311659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137336 ] 00:22:20.028 [2024-07-24 10:47:46.457144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.028 [2024-07-24 10:47:46.584515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.028 [2024-07-24 10:47:46.661475] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.960 10:47:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.960 10:47:47 -- common/autotest_common.sh@852 -- # return 0 00:22:20.960 10:47:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.960 10:47:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.960 10:47:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:20.960 BaseBdev1_malloc 00:22:20.960 10:47:47 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:21.525 [2024-07-24 10:47:47.929925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:21.525 [2024-07-24 10:47:47.930090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.525 [2024-07-24 10:47:47.930142] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:22:21.525 [2024-07-24 10:47:47.930235] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.525 [2024-07-24 10:47:47.933624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.525 [2024-07-24 10:47:47.933719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:21.525 BaseBdev1 00:22:21.525 10:47:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:21.525 10:47:47 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:21.525 10:47:47 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:21.525 BaseBdev2_malloc 00:22:21.783 10:47:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:22.041 [2024-07-24 
10:47:48.502755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:22.041 [2024-07-24 10:47:48.502897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.041 [2024-07-24 10:47:48.502954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:22:22.041 [2024-07-24 10:47:48.503017] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.041 [2024-07-24 10:47:48.505810] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.041 [2024-07-24 10:47:48.505870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:22.041 BaseBdev2 00:22:22.041 10:47:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:22.041 10:47:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:22.041 10:47:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:22.298 BaseBdev3_malloc 00:22:22.299 10:47:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:22.556 [2024-07-24 10:47:49.149375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:22.556 [2024-07-24 10:47:49.149514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.556 [2024-07-24 10:47:49.149588] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:22.556 [2024-07-24 10:47:49.149671] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.556 [2024-07-24 10:47:49.152765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.556 [2024-07-24 10:47:49.152846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:22.556 BaseBdev3 00:22:22.556 10:47:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:22.556 10:47:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:22.556 10:47:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:22.814 BaseBdev4_malloc 00:22:22.814 10:47:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:23.072 [2024-07-24 10:47:49.709977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:23.072 [2024-07-24 10:47:49.710120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.072 [2024-07-24 10:47:49.710174] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:23.072 [2024-07-24 10:47:49.710238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.072 [2024-07-24 10:47:49.713204] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.072 [2024-07-24 10:47:49.713289] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:23.072 BaseBdev4 00:22:23.072 10:47:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:23.329 spare_malloc 00:22:23.330 10:47:49 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:23.587 spare_delay 00:22:23.587 10:47:50 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:23.844 [2024-07-24 10:47:50.474626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:23.844 [2024-07-24 10:47:50.474805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.844 [2024-07-24 10:47:50.474858] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:23.844 [2024-07-24 10:47:50.474912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.844 [2024-07-24 10:47:50.477939] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.844 [2024-07-24 10:47:50.478020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:23.844 spare 00:22:23.844 10:47:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:24.103 [2024-07-24 10:47:50.726984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:24.103 [2024-07-24 10:47:50.729703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.103 [2024-07-24 10:47:50.729807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:24.103 [2024-07-24 10:47:50.729870] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:24.103 [2024-07-24 10:47:50.730199] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:24.103 [2024-07-24 10:47:50.730226] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:24.103 [2024-07-24 10:47:50.730420] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:22:24.103 [2024-07-24 10:47:50.730935] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:24.103 [2024-07-24 10:47:50.730962] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:24.103 [2024-07-24 10:47:50.731254] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:22:24.103 10:47:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.362 10:47:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.362 "name": "raid_bdev1", 00:22:24.362 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:24.362 "strip_size_kb": 0, 00:22:24.362 "state": "online", 00:22:24.362 "raid_level": "raid1", 00:22:24.362 "superblock": true, 00:22:24.362 "num_base_bdevs": 4, 00:22:24.362 "num_base_bdevs_discovered": 4, 00:22:24.362 "num_base_bdevs_operational": 4, 00:22:24.362 "base_bdevs_list": [ 00:22:24.362 { 00:22:24.362 "name": "BaseBdev1", 00:22:24.362 "uuid": "f21ba17c-c339-5dc6-a7c1-f55007a26b6a", 00:22:24.362 "is_configured": true, 00:22:24.362 "data_offset": 2048, 00:22:24.362 "data_size": 63488 00:22:24.362 }, 00:22:24.362 { 00:22:24.362 "name": "BaseBdev2", 00:22:24.362 "uuid": "b6bfe3ba-19b1-58a5-89d5-6a5d4cdf6336", 00:22:24.362 "is_configured": true, 00:22:24.362 "data_offset": 2048, 00:22:24.362 "data_size": 63488 00:22:24.362 }, 00:22:24.362 { 00:22:24.362 "name": "BaseBdev3", 00:22:24.362 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:24.362 "is_configured": true, 00:22:24.362 "data_offset": 2048, 00:22:24.362 "data_size": 63488 00:22:24.362 }, 00:22:24.362 { 00:22:24.362 "name": "BaseBdev4", 00:22:24.362 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:24.362 "is_configured": true, 00:22:24.362 "data_offset": 2048, 00:22:24.362 "data_size": 63488 00:22:24.362 } 00:22:24.362 ] 00:22:24.362 }' 00:22:24.362 10:47:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.362 10:47:51 -- common/autotest_common.sh@10 -- # set +x 00:22:25.298 10:47:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:25.298 10:47:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:25.298 [2024-07-24 10:47:51.975981] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.556 10:47:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:25.556 10:47:52 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.556 10:47:52 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:25.866 10:47:52 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:25.866 10:47:52 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:25.866 10:47:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:25.866 10:47:52 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:25.866 [2024-07-24 10:47:52.363747] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:22:25.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:25.866 Zero copy mechanism will not be used. 00:22:25.866 Running I/O for 60 seconds... 
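Every verify step in this log follows the same polling pattern: dump all raid bdevs over the RPC socket, pick out raid_bdev1 with jq, and read the process type, target, and progress fields that appear in the JSON dumps traced throughout this section. A minimal standalone sketch of that pattern, reusing the socket path, bdev name, and jq filters from this run (illustrative only, not the verify_raid_bdev_process helper itself):

  #!/usr/bin/env bash
  # Sketch of the rebuild-progress poll traced above (assumes an SPDK target
  # listening on /var/tmp/spdk-raid.sock that exposes a raid bdev "raid_bdev1").
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Fetch every raid bdev and keep only the one under test.
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

  # The "process" object is only present while a rebuild is running,
  # hence the // "none" fallbacks used by the test.
  ptype=$(echo "$info"   | jq -r '.process.type // "none"')
  ptarget=$(echo "$info" | jq -r '.process.target // "none"')
  pblocks=$(echo "$info" | jq -r '.process.progress.blocks // 0')

  echo "process=$ptype target=$ptarget progress_blocks=$pblocks"
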
00:22:25.866 [2024-07-24 10:47:52.524419] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:26.164 [2024-07-24 10:47:52.532515] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.164 10:47:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.422 10:47:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:26.422 "name": "raid_bdev1", 00:22:26.422 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:26.422 "strip_size_kb": 0, 00:22:26.422 "state": "online", 00:22:26.422 "raid_level": "raid1", 00:22:26.422 "superblock": true, 00:22:26.422 "num_base_bdevs": 4, 00:22:26.422 "num_base_bdevs_discovered": 3, 00:22:26.422 "num_base_bdevs_operational": 3, 00:22:26.422 "base_bdevs_list": [ 00:22:26.422 { 00:22:26.422 "name": null, 00:22:26.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.422 "is_configured": false, 00:22:26.422 "data_offset": 2048, 00:22:26.422 "data_size": 63488 00:22:26.422 }, 00:22:26.422 { 00:22:26.422 "name": "BaseBdev2", 00:22:26.422 "uuid": "b6bfe3ba-19b1-58a5-89d5-6a5d4cdf6336", 00:22:26.422 "is_configured": true, 00:22:26.422 "data_offset": 2048, 00:22:26.422 "data_size": 63488 00:22:26.422 }, 00:22:26.422 { 00:22:26.422 "name": "BaseBdev3", 00:22:26.422 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:26.422 "is_configured": true, 00:22:26.422 "data_offset": 2048, 00:22:26.422 "data_size": 63488 00:22:26.422 }, 00:22:26.422 { 00:22:26.422 "name": "BaseBdev4", 00:22:26.422 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:26.422 "is_configured": true, 00:22:26.422 "data_offset": 2048, 00:22:26.422 "data_size": 63488 00:22:26.422 } 00:22:26.422 ] 00:22:26.422 }' 00:22:26.422 10:47:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:26.422 10:47:52 -- common/autotest_common.sh@10 -- # set +x 00:22:27.356 10:47:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:27.356 [2024-07-24 10:47:53.977928] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:27.356 [2024-07-24 10:47:53.978011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:27.356 10:47:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:27.356 [2024-07-24 10:47:54.022085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:22:27.356 [2024-07-24 10:47:54.025002] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:27.614 
[2024-07-24 10:47:54.147049] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:27.614 [2024-07-24 10:47:54.147842] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:27.873 [2024-07-24 10:47:54.424882] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:28.130 [2024-07-24 10:47:54.789475] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:28.388 [2024-07-24 10:47:54.939095] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:28.388 10:47:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.388 10:47:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.388 10:47:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.388 10:47:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.388 10:47:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.388 10:47:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.388 10:47:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.646 [2024-07-24 10:47:55.270199] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:28.904 10:47:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.904 "name": "raid_bdev1", 00:22:28.904 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:28.904 "strip_size_kb": 0, 00:22:28.904 "state": "online", 00:22:28.904 "raid_level": "raid1", 00:22:28.904 "superblock": true, 00:22:28.904 "num_base_bdevs": 4, 00:22:28.904 "num_base_bdevs_discovered": 4, 00:22:28.904 "num_base_bdevs_operational": 4, 00:22:28.904 "process": { 00:22:28.904 "type": "rebuild", 00:22:28.904 "target": "spare", 00:22:28.904 "progress": { 00:22:28.904 "blocks": 14336, 00:22:28.904 "percent": 22 00:22:28.904 } 00:22:28.904 }, 00:22:28.904 "base_bdevs_list": [ 00:22:28.904 { 00:22:28.904 "name": "spare", 00:22:28.904 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:28.904 "is_configured": true, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 }, 00:22:28.904 { 00:22:28.904 "name": "BaseBdev2", 00:22:28.904 "uuid": "b6bfe3ba-19b1-58a5-89d5-6a5d4cdf6336", 00:22:28.904 "is_configured": true, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 }, 00:22:28.904 { 00:22:28.904 "name": "BaseBdev3", 00:22:28.904 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:28.904 "is_configured": true, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 }, 00:22:28.904 { 00:22:28.904 "name": "BaseBdev4", 00:22:28.904 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:28.904 "is_configured": true, 00:22:28.904 "data_offset": 2048, 00:22:28.904 "data_size": 63488 00:22:28.904 } 00:22:28.904 ] 00:22:28.904 }' 00:22:28.904 10:47:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.904 [2024-07-24 10:47:55.383164] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:28.904 [2024-07-24 10:47:55.384430] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:28.904 10:47:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.904 10:47:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.904 10:47:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.904 10:47:55 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:29.163 [2024-07-24 10:47:55.667297] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:29.163 [2024-07-24 10:47:55.759088] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:29.163 [2024-07-24 10:47:55.774222] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:29.163 [2024-07-24 10:47:55.811444] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.421 10:47:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.749 10:47:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:29.749 "name": "raid_bdev1", 00:22:29.749 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:29.749 "strip_size_kb": 0, 00:22:29.749 "state": "online", 00:22:29.749 "raid_level": "raid1", 00:22:29.749 "superblock": true, 00:22:29.749 "num_base_bdevs": 4, 00:22:29.749 "num_base_bdevs_discovered": 3, 00:22:29.749 "num_base_bdevs_operational": 3, 00:22:29.749 "base_bdevs_list": [ 00:22:29.749 { 00:22:29.749 "name": null, 00:22:29.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.749 "is_configured": false, 00:22:29.749 "data_offset": 2048, 00:22:29.749 "data_size": 63488 00:22:29.749 }, 00:22:29.749 { 00:22:29.749 "name": "BaseBdev2", 00:22:29.749 "uuid": "b6bfe3ba-19b1-58a5-89d5-6a5d4cdf6336", 00:22:29.749 "is_configured": true, 00:22:29.749 "data_offset": 2048, 00:22:29.749 "data_size": 63488 00:22:29.749 }, 00:22:29.749 { 00:22:29.749 "name": "BaseBdev3", 00:22:29.749 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:29.749 "is_configured": true, 00:22:29.749 "data_offset": 2048, 00:22:29.749 "data_size": 63488 00:22:29.749 }, 00:22:29.749 { 00:22:29.749 "name": "BaseBdev4", 00:22:29.749 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:29.749 "is_configured": true, 00:22:29.749 "data_offset": 2048, 00:22:29.749 "data_size": 63488 00:22:29.749 } 00:22:29.749 ] 00:22:29.749 }' 00:22:29.749 10:47:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:29.749 10:47:56 -- common/autotest_common.sh@10 -- # set +x 00:22:30.316 
10:47:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.317 10:47:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.317 10:47:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:30.317 10:47:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:30.317 10:47:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.317 10:47:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.317 10:47:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.575 10:47:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.575 "name": "raid_bdev1", 00:22:30.575 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:30.575 "strip_size_kb": 0, 00:22:30.575 "state": "online", 00:22:30.575 "raid_level": "raid1", 00:22:30.575 "superblock": true, 00:22:30.575 "num_base_bdevs": 4, 00:22:30.575 "num_base_bdevs_discovered": 3, 00:22:30.575 "num_base_bdevs_operational": 3, 00:22:30.575 "base_bdevs_list": [ 00:22:30.575 { 00:22:30.575 "name": null, 00:22:30.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.575 "is_configured": false, 00:22:30.575 "data_offset": 2048, 00:22:30.575 "data_size": 63488 00:22:30.575 }, 00:22:30.575 { 00:22:30.575 "name": "BaseBdev2", 00:22:30.575 "uuid": "b6bfe3ba-19b1-58a5-89d5-6a5d4cdf6336", 00:22:30.575 "is_configured": true, 00:22:30.575 "data_offset": 2048, 00:22:30.575 "data_size": 63488 00:22:30.575 }, 00:22:30.575 { 00:22:30.575 "name": "BaseBdev3", 00:22:30.575 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:30.575 "is_configured": true, 00:22:30.575 "data_offset": 2048, 00:22:30.575 "data_size": 63488 00:22:30.575 }, 00:22:30.575 { 00:22:30.575 "name": "BaseBdev4", 00:22:30.575 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:30.575 "is_configured": true, 00:22:30.575 "data_offset": 2048, 00:22:30.575 "data_size": 63488 00:22:30.575 } 00:22:30.575 ] 00:22:30.575 }' 00:22:30.575 10:47:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.575 10:47:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:30.575 10:47:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.833 10:47:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:30.833 10:47:57 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:31.092 [2024-07-24 10:47:57.579907] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:31.092 [2024-07-24 10:47:57.580244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:31.092 10:47:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:31.092 [2024-07-24 10:47:57.631835] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:22:31.092 [2024-07-24 10:47:57.634963] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:31.092 [2024-07-24 10:47:57.757207] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:31.092 [2024-07-24 10:47:57.759093] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:31.350 [2024-07-24 10:47:58.012698] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:22:31.917 [2024-07-24 10:47:58.294790] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:31.917 [2024-07-24 10:47:58.296661] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:31.917 [2024-07-24 10:47:58.509420] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:31.917 [2024-07-24 10:47:58.510552] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:32.175 10:47:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.175 10:47:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.175 10:47:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.175 10:47:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.175 10:47:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.175 10:47:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.175 10:47:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.175 [2024-07-24 10:47:58.840198] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:32.175 [2024-07-24 10:47:58.842038] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:32.433 10:47:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.433 "name": "raid_bdev1", 00:22:32.433 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:32.433 "strip_size_kb": 0, 00:22:32.433 "state": "online", 00:22:32.433 "raid_level": "raid1", 00:22:32.433 "superblock": true, 00:22:32.433 "num_base_bdevs": 4, 00:22:32.433 "num_base_bdevs_discovered": 4, 00:22:32.433 "num_base_bdevs_operational": 4, 00:22:32.433 "process": { 00:22:32.433 "type": "rebuild", 00:22:32.433 "target": "spare", 00:22:32.433 "progress": { 00:22:32.433 "blocks": 14336, 00:22:32.433 "percent": 22 00:22:32.433 } 00:22:32.433 }, 00:22:32.433 "base_bdevs_list": [ 00:22:32.433 { 00:22:32.433 "name": "spare", 00:22:32.433 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:32.433 "is_configured": true, 00:22:32.433 "data_offset": 2048, 00:22:32.433 "data_size": 63488 00:22:32.433 }, 00:22:32.433 { 00:22:32.433 "name": "BaseBdev2", 00:22:32.433 "uuid": "b6bfe3ba-19b1-58a5-89d5-6a5d4cdf6336", 00:22:32.433 "is_configured": true, 00:22:32.433 "data_offset": 2048, 00:22:32.433 "data_size": 63488 00:22:32.433 }, 00:22:32.433 { 00:22:32.433 "name": "BaseBdev3", 00:22:32.433 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:32.433 "is_configured": true, 00:22:32.433 "data_offset": 2048, 00:22:32.433 "data_size": 63488 00:22:32.433 }, 00:22:32.433 { 00:22:32.433 "name": "BaseBdev4", 00:22:32.433 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:32.433 "is_configured": true, 00:22:32.433 "data_offset": 2048, 00:22:32.433 "data_size": 63488 00:22:32.433 } 00:22:32.433 ] 00:22:32.433 }' 00:22:32.434 10:47:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.434 10:47:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.434 10:47:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.434 10:47:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == 
\s\p\a\r\e ]] 00:22:32.434 10:47:59 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:32.434 10:47:59 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:32.434 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:32.434 10:47:59 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:32.434 10:47:59 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:32.434 10:47:59 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:32.434 10:47:59 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:32.434 [2024-07-24 10:47:59.044604] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:32.692 [2024-07-24 10:47:59.284236] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:32.951 [2024-07-24 10:47:59.382659] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:32.951 [2024-07-24 10:47:59.414890] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000026d0 00:22:32.951 [2024-07-24 10:47:59.415029] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002940 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.951 10:47:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.210 [2024-07-24 10:47:59.658977] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:33.210 10:47:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.210 "name": "raid_bdev1", 00:22:33.210 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:33.210 "strip_size_kb": 0, 00:22:33.210 "state": "online", 00:22:33.210 "raid_level": "raid1", 00:22:33.210 "superblock": true, 00:22:33.210 "num_base_bdevs": 4, 00:22:33.210 "num_base_bdevs_discovered": 3, 00:22:33.210 "num_base_bdevs_operational": 3, 00:22:33.210 "process": { 00:22:33.210 "type": "rebuild", 00:22:33.210 "target": "spare", 00:22:33.210 "progress": { 00:22:33.210 "blocks": 26624, 00:22:33.210 "percent": 41 00:22:33.210 } 00:22:33.210 }, 00:22:33.210 "base_bdevs_list": [ 00:22:33.210 { 00:22:33.210 "name": "spare", 00:22:33.210 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:33.210 "is_configured": true, 00:22:33.210 "data_offset": 2048, 00:22:33.210 "data_size": 63488 00:22:33.210 }, 00:22:33.210 { 00:22:33.210 "name": null, 00:22:33.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.210 "is_configured": false, 00:22:33.210 "data_offset": 2048, 00:22:33.210 "data_size": 63488 00:22:33.210 }, 00:22:33.210 { 00:22:33.210 "name": "BaseBdev3", 00:22:33.210 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 
00:22:33.210 "is_configured": true, 00:22:33.210 "data_offset": 2048, 00:22:33.210 "data_size": 63488 00:22:33.210 }, 00:22:33.210 { 00:22:33.210 "name": "BaseBdev4", 00:22:33.210 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:33.210 "is_configured": true, 00:22:33.210 "data_offset": 2048, 00:22:33.210 "data_size": 63488 00:22:33.210 } 00:22:33.210 ] 00:22:33.210 }' 00:22:33.210 10:47:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.210 10:47:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.210 10:47:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@657 -- # local timeout=559 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.468 10:47:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.726 10:48:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.726 "name": "raid_bdev1", 00:22:33.726 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:33.726 "strip_size_kb": 0, 00:22:33.726 "state": "online", 00:22:33.726 "raid_level": "raid1", 00:22:33.726 "superblock": true, 00:22:33.726 "num_base_bdevs": 4, 00:22:33.726 "num_base_bdevs_discovered": 3, 00:22:33.726 "num_base_bdevs_operational": 3, 00:22:33.726 "process": { 00:22:33.726 "type": "rebuild", 00:22:33.726 "target": "spare", 00:22:33.726 "progress": { 00:22:33.726 "blocks": 34816, 00:22:33.726 "percent": 54 00:22:33.726 } 00:22:33.726 }, 00:22:33.726 "base_bdevs_list": [ 00:22:33.726 { 00:22:33.726 "name": "spare", 00:22:33.726 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:33.726 "is_configured": true, 00:22:33.726 "data_offset": 2048, 00:22:33.726 "data_size": 63488 00:22:33.726 }, 00:22:33.726 { 00:22:33.726 "name": null, 00:22:33.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.726 "is_configured": false, 00:22:33.726 "data_offset": 2048, 00:22:33.726 "data_size": 63488 00:22:33.726 }, 00:22:33.726 { 00:22:33.726 "name": "BaseBdev3", 00:22:33.727 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:33.727 "is_configured": true, 00:22:33.727 "data_offset": 2048, 00:22:33.727 "data_size": 63488 00:22:33.727 }, 00:22:33.727 { 00:22:33.727 "name": "BaseBdev4", 00:22:33.727 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:33.727 "is_configured": true, 00:22:33.727 "data_offset": 2048, 00:22:33.727 "data_size": 63488 00:22:33.727 } 00:22:33.727 ] 00:22:33.727 }' 00:22:33.727 10:48:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.727 10:48:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.727 10:48:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.727 10:48:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.727 10:48:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:33.727 [2024-07-24 10:48:00.316706] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:33.985 [2024-07-24 10:48:00.437356] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:34.244 [2024-07-24 10:48:00.909003] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:34.502 [2024-07-24 10:48:01.150201] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.759 10:48:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.017 10:48:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:35.017 "name": "raid_bdev1", 00:22:35.017 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:35.017 "strip_size_kb": 0, 00:22:35.017 "state": "online", 00:22:35.017 "raid_level": "raid1", 00:22:35.017 "superblock": true, 00:22:35.017 "num_base_bdevs": 4, 00:22:35.017 "num_base_bdevs_discovered": 3, 00:22:35.017 "num_base_bdevs_operational": 3, 00:22:35.017 "process": { 00:22:35.017 "type": "rebuild", 00:22:35.017 "target": "spare", 00:22:35.017 "progress": { 00:22:35.017 "blocks": 55296, 00:22:35.017 "percent": 87 00:22:35.017 } 00:22:35.017 }, 00:22:35.017 "base_bdevs_list": [ 00:22:35.017 { 00:22:35.017 "name": "spare", 00:22:35.017 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:35.017 "is_configured": true, 00:22:35.017 "data_offset": 2048, 00:22:35.017 "data_size": 63488 00:22:35.017 }, 00:22:35.017 { 00:22:35.017 "name": null, 00:22:35.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.017 "is_configured": false, 00:22:35.017 "data_offset": 2048, 00:22:35.017 "data_size": 63488 00:22:35.017 }, 00:22:35.017 { 00:22:35.017 "name": "BaseBdev3", 00:22:35.017 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:35.017 "is_configured": true, 00:22:35.017 "data_offset": 2048, 00:22:35.017 "data_size": 63488 00:22:35.017 }, 00:22:35.017 { 00:22:35.017 "name": "BaseBdev4", 00:22:35.017 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:35.017 "is_configured": true, 00:22:35.017 "data_offset": 2048, 00:22:35.017 "data_size": 63488 00:22:35.017 } 00:22:35.017 ] 00:22:35.017 }' 00:22:35.017 10:48:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:35.017 [2024-07-24 10:48:01.605078] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:35.017 10:48:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.017 10:48:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:35.275 10:48:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.275 10:48:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:35.275 [2024-07-24 10:48:01.811156] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:35.534 [2024-07-24 10:48:02.035283] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:35.534 [2024-07-24 10:48:02.143230] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:35.534 [2024-07-24 10:48:02.147055] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.100 10:48:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.359 10:48:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:36.359 "name": "raid_bdev1", 00:22:36.359 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:36.359 "strip_size_kb": 0, 00:22:36.359 "state": "online", 00:22:36.359 "raid_level": "raid1", 00:22:36.359 "superblock": true, 00:22:36.359 "num_base_bdevs": 4, 00:22:36.359 "num_base_bdevs_discovered": 3, 00:22:36.359 "num_base_bdevs_operational": 3, 00:22:36.359 "base_bdevs_list": [ 00:22:36.359 { 00:22:36.359 "name": "spare", 00:22:36.359 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:36.359 "is_configured": true, 00:22:36.359 "data_offset": 2048, 00:22:36.359 "data_size": 63488 00:22:36.359 }, 00:22:36.359 { 00:22:36.359 "name": null, 00:22:36.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.359 "is_configured": false, 00:22:36.359 "data_offset": 2048, 00:22:36.359 "data_size": 63488 00:22:36.359 }, 00:22:36.359 { 00:22:36.359 "name": "BaseBdev3", 00:22:36.359 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:36.359 "is_configured": true, 00:22:36.359 "data_offset": 2048, 00:22:36.359 "data_size": 63488 00:22:36.359 }, 00:22:36.359 { 00:22:36.359 "name": "BaseBdev4", 00:22:36.359 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:36.359 "is_configured": true, 00:22:36.359 "data_offset": 2048, 00:22:36.359 "data_size": 63488 00:22:36.359 } 00:22:36.359 ] 00:22:36.359 }' 00:22:36.359 10:48:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@660 -- # break 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
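The repeated progress snapshots between 10:47:59 and 10:48:03, and the break at bdev_raid.sh@660, come from a plain timeout loop: re-read the bdev roughly once a second and stop as soon as the rebuild process is gone or the bash SECONDS counter passes the deadline. A rough sketch of that loop, reusing the illustrative check_raid_process helper sketched earlier (the deadline value here is only an example; the trace shows the computed value 559):
deadline=$((SECONDS + 30))   # example grace period; the trace computed 559 from its own SECONDS
while (( SECONDS < deadline )); do
    # Stops once .process disappears and type/target read back as "none".
    check_raid_process raid_bdev1 rebuild spare || break
    sleep 1
done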
00:22:36.616 10:48:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:36.872 "name": "raid_bdev1", 00:22:36.872 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:36.872 "strip_size_kb": 0, 00:22:36.872 "state": "online", 00:22:36.872 "raid_level": "raid1", 00:22:36.872 "superblock": true, 00:22:36.872 "num_base_bdevs": 4, 00:22:36.872 "num_base_bdevs_discovered": 3, 00:22:36.872 "num_base_bdevs_operational": 3, 00:22:36.872 "base_bdevs_list": [ 00:22:36.872 { 00:22:36.872 "name": "spare", 00:22:36.872 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:36.872 "is_configured": true, 00:22:36.872 "data_offset": 2048, 00:22:36.872 "data_size": 63488 00:22:36.872 }, 00:22:36.872 { 00:22:36.872 "name": null, 00:22:36.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.872 "is_configured": false, 00:22:36.872 "data_offset": 2048, 00:22:36.872 "data_size": 63488 00:22:36.872 }, 00:22:36.872 { 00:22:36.872 "name": "BaseBdev3", 00:22:36.872 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:36.872 "is_configured": true, 00:22:36.872 "data_offset": 2048, 00:22:36.872 "data_size": 63488 00:22:36.872 }, 00:22:36.872 { 00:22:36.872 "name": "BaseBdev4", 00:22:36.872 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:36.872 "is_configured": true, 00:22:36.872 "data_offset": 2048, 00:22:36.872 "data_size": 63488 00:22:36.872 } 00:22:36.872 ] 00:22:36.872 }' 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.872 10:48:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.182 10:48:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.182 "name": "raid_bdev1", 00:22:37.182 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:37.182 "strip_size_kb": 0, 00:22:37.182 "state": "online", 00:22:37.182 "raid_level": "raid1", 00:22:37.182 "superblock": true, 00:22:37.182 "num_base_bdevs": 4, 00:22:37.182 "num_base_bdevs_discovered": 3, 00:22:37.182 "num_base_bdevs_operational": 3, 00:22:37.182 "base_bdevs_list": [ 00:22:37.182 { 00:22:37.182 "name": "spare", 00:22:37.182 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:37.182 "is_configured": true, 00:22:37.182 "data_offset": 2048, 00:22:37.182 "data_size": 63488 00:22:37.182 }, 00:22:37.182 { 00:22:37.182 
"name": null, 00:22:37.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.182 "is_configured": false, 00:22:37.182 "data_offset": 2048, 00:22:37.182 "data_size": 63488 00:22:37.182 }, 00:22:37.182 { 00:22:37.182 "name": "BaseBdev3", 00:22:37.182 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:37.182 "is_configured": true, 00:22:37.182 "data_offset": 2048, 00:22:37.182 "data_size": 63488 00:22:37.182 }, 00:22:37.182 { 00:22:37.182 "name": "BaseBdev4", 00:22:37.182 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:37.182 "is_configured": true, 00:22:37.182 "data_offset": 2048, 00:22:37.182 "data_size": 63488 00:22:37.182 } 00:22:37.182 ] 00:22:37.182 }' 00:22:37.182 10:48:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.182 10:48:03 -- common/autotest_common.sh@10 -- # set +x 00:22:38.119 10:48:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:38.119 [2024-07-24 10:48:04.763671] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.119 [2024-07-24 10:48:04.764029] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:38.377 00:22:38.377 Latency(us) 00:22:38.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.377 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:38.377 raid_bdev1 : 12.44 95.70 287.11 0.00 0.00 14603.48 309.06 123922.62 00:22:38.377 =================================================================================================================== 00:22:38.377 Total : 95.70 287.11 0.00 0.00 14603.48 309.06 123922.62 00:22:38.377 [2024-07-24 10:48:04.817636] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:38.377 [2024-07-24 10:48:04.817878] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:38.377 0 00:22:38.377 [2024-07-24 10:48:04.818135] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:38.377 [2024-07-24 10:48:04.818156] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:38.377 10:48:04 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.377 10:48:04 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:38.636 10:48:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:38.636 10:48:05 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:38.636 10:48:05 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@12 -- # local i 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:38.636 10:48:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:38.895 /dev/nbd0 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:38.895 
10:48:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:38.895 10:48:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:38.895 10:48:05 -- common/autotest_common.sh@857 -- # local i 00:22:38.895 10:48:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:38.895 10:48:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:38.895 10:48:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:38.895 10:48:05 -- common/autotest_common.sh@861 -- # break 00:22:38.895 10:48:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:38.895 10:48:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:38.895 10:48:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:38.895 1+0 records in 00:22:38.895 1+0 records out 00:22:38.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047584 s, 8.6 MB/s 00:22:38.895 10:48:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.895 10:48:05 -- common/autotest_common.sh@874 -- # size=4096 00:22:38.895 10:48:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.895 10:48:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:38.895 10:48:05 -- common/autotest_common.sh@877 -- # return 0 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:38.895 10:48:05 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:38.895 10:48:05 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:38.895 10:48:05 -- bdev/bdev_raid.sh@678 -- # continue 00:22:38.895 10:48:05 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:38.895 10:48:05 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:38.895 10:48:05 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@12 -- # local i 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:38.895 10:48:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:39.153 /dev/nbd1 00:22:39.153 10:48:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:39.153 10:48:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:39.153 10:48:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:39.153 10:48:05 -- common/autotest_common.sh@857 -- # local i 00:22:39.153 10:48:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:39.153 10:48:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:39.153 10:48:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:39.153 10:48:05 -- common/autotest_common.sh@861 -- # break 00:22:39.153 10:48:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:39.153 10:48:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:39.153 10:48:05 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:39.153 1+0 records in 00:22:39.153 1+0 records out 00:22:39.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490789 s, 8.3 MB/s 00:22:39.153 10:48:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.411 10:48:05 -- common/autotest_common.sh@874 -- # size=4096 00:22:39.411 10:48:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.411 10:48:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:39.411 10:48:05 -- common/autotest_common.sh@877 -- # return 0 00:22:39.411 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:39.411 10:48:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:39.411 10:48:05 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:39.411 10:48:05 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:39.411 10:48:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:39.411 10:48:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:39.411 10:48:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:39.411 10:48:05 -- bdev/nbd_common.sh@51 -- # local i 00:22:39.411 10:48:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:39.412 10:48:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@41 -- # break 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@45 -- # return 0 00:22:39.670 10:48:06 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:39.670 10:48:06 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:39.670 10:48:06 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@12 -- # local i 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:39.670 10:48:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:39.928 /dev/nbd1 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:39.928 10:48:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:39.928 10:48:06 -- common/autotest_common.sh@857 -- # local i 00:22:39.928 10:48:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:39.928 10:48:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:39.928 10:48:06 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:39.928 10:48:06 -- common/autotest_common.sh@861 -- # break 00:22:39.928 10:48:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:39.928 10:48:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:39.928 10:48:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:39.928 1+0 records in 00:22:39.928 1+0 records out 00:22:39.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589956 s, 6.9 MB/s 00:22:39.928 10:48:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.928 10:48:06 -- common/autotest_common.sh@874 -- # size=4096 00:22:39.928 10:48:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:39.928 10:48:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:39.928 10:48:06 -- common/autotest_common.sh@877 -- # return 0 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:39.928 10:48:06 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:39.928 10:48:06 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@51 -- # local i 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:39.928 10:48:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@41 -- # break 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@45 -- # return 0 00:22:40.186 10:48:06 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@51 -- # local i 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:40.186 10:48:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:40.753 10:48:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:40.753 10:48:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:40.753 10:48:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:40.753 10:48:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:40.753 10:48:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:40.753 10:48:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:40.753 10:48:07 -- bdev/nbd_common.sh@41 -- # break 00:22:40.753 10:48:07 -- 
bdev/nbd_common.sh@45 -- # return 0 00:22:40.753 10:48:07 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:40.753 10:48:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:40.753 10:48:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:40.753 10:48:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:41.011 10:48:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:41.270 [2024-07-24 10:48:07.741385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:41.270 [2024-07-24 10:48:07.741798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.270 [2024-07-24 10:48:07.741891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:41.270 [2024-07-24 10:48:07.742165] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.270 [2024-07-24 10:48:07.745133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.270 [2024-07-24 10:48:07.745377] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:41.270 [2024-07-24 10:48:07.745651] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:41.270 [2024-07-24 10:48:07.745830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.270 BaseBdev1 00:22:41.270 10:48:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:41.270 10:48:07 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:41.270 10:48:07 -- bdev/bdev_raid.sh@696 -- # continue 00:22:41.270 10:48:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:41.270 10:48:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:41.270 10:48:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:41.528 10:48:08 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:41.787 [2024-07-24 10:48:08.233945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:41.787 [2024-07-24 10:48:08.234303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:41.787 [2024-07-24 10:48:08.234538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:41.787 [2024-07-24 10:48:08.234732] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:41.787 [2024-07-24 10:48:08.235429] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:41.787 [2024-07-24 10:48:08.235675] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:41.787 [2024-07-24 10:48:08.235925] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:41.787 [2024-07-24 10:48:08.236069] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:41.787 [2024-07-24 10:48:08.236234] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.787 [2024-07-24 10:48:08.236399] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state configuring 00:22:41.787 [2024-07-24 10:48:08.236597] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:41.787 BaseBdev3 00:22:41.787 10:48:08 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:41.787 10:48:08 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:41.787 10:48:08 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:42.045 10:48:08 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:42.045 [2024-07-24 10:48:08.710144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:42.045 [2024-07-24 10:48:08.710524] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.045 [2024-07-24 10:48:08.710729] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:42.045 [2024-07-24 10:48:08.710926] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.045 [2024-07-24 10:48:08.711623] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.045 [2024-07-24 10:48:08.711830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:42.045 [2024-07-24 10:48:08.712069] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:42.045 [2024-07-24 10:48:08.712253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:42.045 BaseBdev4 00:22:42.045 10:48:08 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:42.303 10:48:08 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:42.561 [2024-07-24 10:48:09.194330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:42.561 [2024-07-24 10:48:09.194709] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:42.561 [2024-07-24 10:48:09.194804] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:42.561 [2024-07-24 10:48:09.195137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:42.561 [2024-07-24 10:48:09.195779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:42.561 [2024-07-24 10:48:09.195983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:42.561 [2024-07-24 10:48:09.196246] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:42.561 [2024-07-24 10:48:09.196431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.561 spare 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.561 10:48:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.819 [2024-07-24 10:48:09.296646] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:22:42.819 [2024-07-24 10:48:09.296900] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:42.819 [2024-07-24 10:48:09.297204] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033bc0 00:22:42.819 [2024-07-24 10:48:09.297888] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:22:42.819 [2024-07-24 10:48:09.298027] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:22:42.819 [2024-07-24 10:48:09.298343] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:42.819 10:48:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.819 "name": "raid_bdev1", 00:22:42.819 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:42.819 "strip_size_kb": 0, 00:22:42.819 "state": "online", 00:22:42.819 "raid_level": "raid1", 00:22:42.819 "superblock": true, 00:22:42.819 "num_base_bdevs": 4, 00:22:42.819 "num_base_bdevs_discovered": 3, 00:22:42.819 "num_base_bdevs_operational": 3, 00:22:42.819 "base_bdevs_list": [ 00:22:42.819 { 00:22:42.819 "name": "spare", 00:22:42.819 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:42.819 "is_configured": true, 00:22:42.819 "data_offset": 2048, 00:22:42.819 "data_size": 63488 00:22:42.819 }, 00:22:42.819 { 00:22:42.819 "name": null, 00:22:42.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.819 "is_configured": false, 00:22:42.819 "data_offset": 2048, 00:22:42.819 "data_size": 63488 00:22:42.819 }, 00:22:42.819 { 00:22:42.819 "name": "BaseBdev3", 00:22:42.819 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:42.820 "is_configured": true, 00:22:42.820 "data_offset": 2048, 00:22:42.820 "data_size": 63488 00:22:42.820 }, 00:22:42.820 { 00:22:42.820 "name": "BaseBdev4", 00:22:42.820 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:42.820 "is_configured": true, 00:22:42.820 "data_offset": 2048, 00:22:42.820 "data_size": 63488 00:22:42.820 } 00:22:42.820 ] 00:22:42.820 }' 00:22:42.820 10:48:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.820 10:48:09 -- common/autotest_common.sh@10 -- # set +x 00:22:43.753 10:48:10 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:43.753 10:48:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:43.753 10:48:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:43.753 10:48:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:43.753 10:48:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:43.753 10:48:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.753 10:48:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.011 10:48:10 -- bdev/bdev_raid.sh@188 
-- # raid_bdev_info='{ 00:22:44.011 "name": "raid_bdev1", 00:22:44.011 "uuid": "f0daed88-2222-4436-a72e-d3f75095b87e", 00:22:44.011 "strip_size_kb": 0, 00:22:44.011 "state": "online", 00:22:44.011 "raid_level": "raid1", 00:22:44.011 "superblock": true, 00:22:44.011 "num_base_bdevs": 4, 00:22:44.011 "num_base_bdevs_discovered": 3, 00:22:44.011 "num_base_bdevs_operational": 3, 00:22:44.011 "base_bdevs_list": [ 00:22:44.011 { 00:22:44.011 "name": "spare", 00:22:44.011 "uuid": "910dd219-ba1f-56e8-9e34-f1874d4aafb0", 00:22:44.011 "is_configured": true, 00:22:44.011 "data_offset": 2048, 00:22:44.011 "data_size": 63488 00:22:44.011 }, 00:22:44.011 { 00:22:44.011 "name": null, 00:22:44.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.011 "is_configured": false, 00:22:44.011 "data_offset": 2048, 00:22:44.011 "data_size": 63488 00:22:44.011 }, 00:22:44.011 { 00:22:44.011 "name": "BaseBdev3", 00:22:44.011 "uuid": "48606fe4-9b29-5d44-b30b-ac79b6464e10", 00:22:44.011 "is_configured": true, 00:22:44.011 "data_offset": 2048, 00:22:44.011 "data_size": 63488 00:22:44.011 }, 00:22:44.011 { 00:22:44.011 "name": "BaseBdev4", 00:22:44.011 "uuid": "94968bc2-5bc9-524a-ae1c-51346e2ffd9e", 00:22:44.011 "is_configured": true, 00:22:44.011 "data_offset": 2048, 00:22:44.011 "data_size": 63488 00:22:44.011 } 00:22:44.011 ] 00:22:44.011 }' 00:22:44.011 10:48:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:44.011 10:48:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:44.011 10:48:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:44.011 10:48:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:44.011 10:48:10 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.011 10:48:10 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:44.270 10:48:10 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.270 10:48:10 -- bdev/bdev_raid.sh@709 -- # killprocess 137336 00:22:44.270 10:48:10 -- common/autotest_common.sh@926 -- # '[' -z 137336 ']' 00:22:44.270 10:48:10 -- common/autotest_common.sh@930 -- # kill -0 137336 00:22:44.270 10:48:10 -- common/autotest_common.sh@931 -- # uname 00:22:44.270 10:48:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:44.270 10:48:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137336 00:22:44.270 killing process with pid 137336 00:22:44.270 Received shutdown signal, test time was about 18.508738 seconds 00:22:44.270 00:22:44.270 Latency(us) 00:22:44.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.270 =================================================================================================================== 00:22:44.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.270 10:48:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:44.270 10:48:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:44.270 10:48:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 137336' 00:22:44.270 10:48:10 -- common/autotest_common.sh@945 -- # kill 137336 00:22:44.270 [2024-07-24 10:48:10.875365] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:44.270 10:48:10 -- common/autotest_common.sh@950 -- # wait 137336 00:22:44.270 [2024-07-24 10:48:10.875520] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.270 [2024-07-24 10:48:10.875647] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:44.270 [2024-07-24 10:48:10.875664] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:22:44.270 [2024-07-24 10:48:10.935672] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:44.527 ************************************ 00:22:44.527 END TEST raid_rebuild_test_sb_io 00:22:44.527 ************************************ 00:22:44.527 10:48:11 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:44.527 00:22:44.527 real 0m24.961s 00:22:44.527 user 0m41.536s 00:22:44.527 sys 0m3.351s 00:22:44.527 10:48:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.527 10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:44.799 10:48:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:44.799 10:48:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:44.799 10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:22:44.799 ************************************ 00:22:44.799 START TEST raid5f_state_function_test 00:22:44.799 ************************************ 00:22:44.799 10:48:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=137968 00:22:44.799 10:48:11 -- 
bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 137968' 00:22:44.799 Process raid pid: 137968 00:22:44.799 10:48:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 137968 /var/tmp/spdk-raid.sock 00:22:44.799 10:48:11 -- common/autotest_common.sh@819 -- # '[' -z 137968 ']' 00:22:44.799 10:48:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:44.799 10:48:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:44.799 10:48:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:44.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:44.799 10:48:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:44.799 10:48:11 -- common/autotest_common.sh@10 -- # set +x 00:22:44.799 [2024-07-24 10:48:11.327081] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:22:44.799 [2024-07-24 10:48:11.327563] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.799 [2024-07-24 10:48:11.473607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.057 [2024-07-24 10:48:11.575237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.057 [2024-07-24 10:48:11.634044] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.621 10:48:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:45.622 10:48:12 -- common/autotest_common.sh@852 -- # return 0 00:22:45.622 10:48:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:46.187 [2024-07-24 10:48:12.565624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:46.187 [2024-07-24 10:48:12.566011] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:46.187 [2024-07-24 10:48:12.566139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.187 [2024-07-24 10:48:12.566207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.187 [2024-07-24 10:48:12.566319] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.187 [2024-07-24 10:48:12.566415] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.187 10:48:12 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.187 "name": "Existed_Raid", 00:22:46.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.187 "strip_size_kb": 64, 00:22:46.187 "state": "configuring", 00:22:46.187 "raid_level": "raid5f", 00:22:46.187 "superblock": false, 00:22:46.187 "num_base_bdevs": 3, 00:22:46.187 "num_base_bdevs_discovered": 0, 00:22:46.187 "num_base_bdevs_operational": 3, 00:22:46.187 "base_bdevs_list": [ 00:22:46.187 { 00:22:46.187 "name": "BaseBdev1", 00:22:46.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.187 "is_configured": false, 00:22:46.187 "data_offset": 0, 00:22:46.187 "data_size": 0 00:22:46.187 }, 00:22:46.187 { 00:22:46.187 "name": "BaseBdev2", 00:22:46.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.187 "is_configured": false, 00:22:46.187 "data_offset": 0, 00:22:46.187 "data_size": 0 00:22:46.187 }, 00:22:46.187 { 00:22:46.187 "name": "BaseBdev3", 00:22:46.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.187 "is_configured": false, 00:22:46.187 "data_offset": 0, 00:22:46.187 "data_size": 0 00:22:46.187 } 00:22:46.187 ] 00:22:46.187 }' 00:22:46.187 10:48:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.187 10:48:12 -- common/autotest_common.sh@10 -- # set +x 00:22:47.131 10:48:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:47.131 [2024-07-24 10:48:13.701624] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:47.131 [2024-07-24 10:48:13.701936] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:47.131 10:48:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:47.411 [2024-07-24 10:48:13.945740] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:47.411 [2024-07-24 10:48:13.946068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:47.411 [2024-07-24 10:48:13.946198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:47.411 [2024-07-24 10:48:13.946268] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:47.411 [2024-07-24 10:48:13.946444] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:47.411 [2024-07-24 10:48:13.946587] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:47.411 10:48:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:47.669 [2024-07-24 10:48:14.229347] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:47.669 BaseBdev1 00:22:47.669 10:48:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:47.669 10:48:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 
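The raid5f_state_function_test that starts at 10:48:11 deliberately creates Existed_Raid before any of its base bdevs exist, expects the array to sit in the "configuring" state with num_base_bdevs_discovered at 0, and only then starts backing the slots with malloc bdevs (32 MiB with a 512-byte block size, i.e. the 65536 blocks reported for BaseBdev1 below). A condensed sketch of that sequence, with the RPC invocations taken from the trace and the jq check shown only as an illustration:
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Create a raid5f array whose base bdevs do not exist yet; it must report state "configuring".
$rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -e '.[] | select(.name == "Existed_Raid") | .state == "configuring"'
# Back the first slot, as at 10:48:14 above; the array stays "configuring" until all three exist.
$rpc bdev_malloc_create 32 512 -b BaseBdev1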
00:22:47.669 10:48:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:47.669 10:48:14 -- common/autotest_common.sh@889 -- # local i 00:22:47.669 10:48:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:47.669 10:48:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:47.669 10:48:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.925 10:48:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:48.182 [ 00:22:48.182 { 00:22:48.182 "name": "BaseBdev1", 00:22:48.182 "aliases": [ 00:22:48.182 "0681b087-4da1-46c4-aaa7-52b88ef498bf" 00:22:48.182 ], 00:22:48.182 "product_name": "Malloc disk", 00:22:48.182 "block_size": 512, 00:22:48.182 "num_blocks": 65536, 00:22:48.182 "uuid": "0681b087-4da1-46c4-aaa7-52b88ef498bf", 00:22:48.182 "assigned_rate_limits": { 00:22:48.182 "rw_ios_per_sec": 0, 00:22:48.182 "rw_mbytes_per_sec": 0, 00:22:48.182 "r_mbytes_per_sec": 0, 00:22:48.182 "w_mbytes_per_sec": 0 00:22:48.182 }, 00:22:48.182 "claimed": true, 00:22:48.182 "claim_type": "exclusive_write", 00:22:48.182 "zoned": false, 00:22:48.182 "supported_io_types": { 00:22:48.182 "read": true, 00:22:48.182 "write": true, 00:22:48.182 "unmap": true, 00:22:48.182 "write_zeroes": true, 00:22:48.182 "flush": true, 00:22:48.182 "reset": true, 00:22:48.182 "compare": false, 00:22:48.182 "compare_and_write": false, 00:22:48.182 "abort": true, 00:22:48.183 "nvme_admin": false, 00:22:48.183 "nvme_io": false 00:22:48.183 }, 00:22:48.183 "memory_domains": [ 00:22:48.183 { 00:22:48.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.183 "dma_device_type": 2 00:22:48.183 } 00:22:48.183 ], 00:22:48.183 "driver_specific": {} 00:22:48.183 } 00:22:48.183 ] 00:22:48.183 10:48:14 -- common/autotest_common.sh@895 -- # return 0 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.183 10:48:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.440 10:48:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.440 "name": "Existed_Raid", 00:22:48.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.441 "strip_size_kb": 64, 00:22:48.441 "state": "configuring", 00:22:48.441 "raid_level": "raid5f", 00:22:48.441 "superblock": false, 00:22:48.441 "num_base_bdevs": 3, 00:22:48.441 "num_base_bdevs_discovered": 1, 00:22:48.441 "num_base_bdevs_operational": 3, 00:22:48.441 "base_bdevs_list": [ 00:22:48.441 { 00:22:48.441 "name": "BaseBdev1", 00:22:48.441 "uuid": 
"0681b087-4da1-46c4-aaa7-52b88ef498bf", 00:22:48.441 "is_configured": true, 00:22:48.441 "data_offset": 0, 00:22:48.441 "data_size": 65536 00:22:48.441 }, 00:22:48.441 { 00:22:48.441 "name": "BaseBdev2", 00:22:48.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.441 "is_configured": false, 00:22:48.441 "data_offset": 0, 00:22:48.441 "data_size": 0 00:22:48.441 }, 00:22:48.441 { 00:22:48.441 "name": "BaseBdev3", 00:22:48.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.441 "is_configured": false, 00:22:48.441 "data_offset": 0, 00:22:48.441 "data_size": 0 00:22:48.441 } 00:22:48.441 ] 00:22:48.441 }' 00:22:48.441 10:48:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.441 10:48:14 -- common/autotest_common.sh@10 -- # set +x 00:22:49.006 10:48:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:49.263 [2024-07-24 10:48:15.797846] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:49.264 [2024-07-24 10:48:15.798231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:22:49.264 10:48:15 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:49.264 10:48:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:49.521 [2024-07-24 10:48:16.054017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:49.521 [2024-07-24 10:48:16.056834] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:49.521 [2024-07-24 10:48:16.057057] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:49.521 [2024-07-24 10:48:16.057187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:49.521 [2024-07-24 10:48:16.057342] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.521 10:48:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.779 10:48:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.779 "name": "Existed_Raid", 00:22:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.779 "strip_size_kb": 64, 00:22:49.779 "state": "configuring", 
00:22:49.779 "raid_level": "raid5f", 00:22:49.779 "superblock": false, 00:22:49.779 "num_base_bdevs": 3, 00:22:49.779 "num_base_bdevs_discovered": 1, 00:22:49.779 "num_base_bdevs_operational": 3, 00:22:49.779 "base_bdevs_list": [ 00:22:49.779 { 00:22:49.779 "name": "BaseBdev1", 00:22:49.779 "uuid": "0681b087-4da1-46c4-aaa7-52b88ef498bf", 00:22:49.779 "is_configured": true, 00:22:49.779 "data_offset": 0, 00:22:49.779 "data_size": 65536 00:22:49.779 }, 00:22:49.779 { 00:22:49.779 "name": "BaseBdev2", 00:22:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.779 "is_configured": false, 00:22:49.779 "data_offset": 0, 00:22:49.779 "data_size": 0 00:22:49.779 }, 00:22:49.779 { 00:22:49.779 "name": "BaseBdev3", 00:22:49.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.779 "is_configured": false, 00:22:49.779 "data_offset": 0, 00:22:49.779 "data_size": 0 00:22:49.779 } 00:22:49.779 ] 00:22:49.779 }' 00:22:49.779 10:48:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.779 10:48:16 -- common/autotest_common.sh@10 -- # set +x 00:22:50.345 10:48:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:50.604 [2024-07-24 10:48:17.265144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.604 BaseBdev2 00:22:50.604 10:48:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:50.604 10:48:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:22:50.604 10:48:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:50.604 10:48:17 -- common/autotest_common.sh@889 -- # local i 00:22:50.604 10:48:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:50.604 10:48:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:50.604 10:48:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:51.185 10:48:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:51.185 [ 00:22:51.185 { 00:22:51.185 "name": "BaseBdev2", 00:22:51.185 "aliases": [ 00:22:51.185 "2b551ba2-514b-4459-86d1-1c76d14d75ca" 00:22:51.185 ], 00:22:51.185 "product_name": "Malloc disk", 00:22:51.185 "block_size": 512, 00:22:51.185 "num_blocks": 65536, 00:22:51.185 "uuid": "2b551ba2-514b-4459-86d1-1c76d14d75ca", 00:22:51.185 "assigned_rate_limits": { 00:22:51.185 "rw_ios_per_sec": 0, 00:22:51.185 "rw_mbytes_per_sec": 0, 00:22:51.185 "r_mbytes_per_sec": 0, 00:22:51.185 "w_mbytes_per_sec": 0 00:22:51.185 }, 00:22:51.185 "claimed": true, 00:22:51.185 "claim_type": "exclusive_write", 00:22:51.185 "zoned": false, 00:22:51.185 "supported_io_types": { 00:22:51.185 "read": true, 00:22:51.185 "write": true, 00:22:51.185 "unmap": true, 00:22:51.185 "write_zeroes": true, 00:22:51.185 "flush": true, 00:22:51.185 "reset": true, 00:22:51.185 "compare": false, 00:22:51.185 "compare_and_write": false, 00:22:51.185 "abort": true, 00:22:51.185 "nvme_admin": false, 00:22:51.185 "nvme_io": false 00:22:51.185 }, 00:22:51.185 "memory_domains": [ 00:22:51.185 { 00:22:51.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.185 "dma_device_type": 2 00:22:51.185 } 00:22:51.185 ], 00:22:51.185 "driver_specific": {} 00:22:51.185 } 00:22:51.185 ] 00:22:51.185 10:48:17 -- common/autotest_common.sh@895 -- # return 0 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:51.185 
10:48:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.185 10:48:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.486 10:48:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:51.486 "name": "Existed_Raid", 00:22:51.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.486 "strip_size_kb": 64, 00:22:51.486 "state": "configuring", 00:22:51.486 "raid_level": "raid5f", 00:22:51.486 "superblock": false, 00:22:51.486 "num_base_bdevs": 3, 00:22:51.486 "num_base_bdevs_discovered": 2, 00:22:51.486 "num_base_bdevs_operational": 3, 00:22:51.486 "base_bdevs_list": [ 00:22:51.486 { 00:22:51.486 "name": "BaseBdev1", 00:22:51.486 "uuid": "0681b087-4da1-46c4-aaa7-52b88ef498bf", 00:22:51.486 "is_configured": true, 00:22:51.486 "data_offset": 0, 00:22:51.486 "data_size": 65536 00:22:51.486 }, 00:22:51.486 { 00:22:51.486 "name": "BaseBdev2", 00:22:51.486 "uuid": "2b551ba2-514b-4459-86d1-1c76d14d75ca", 00:22:51.486 "is_configured": true, 00:22:51.486 "data_offset": 0, 00:22:51.486 "data_size": 65536 00:22:51.486 }, 00:22:51.486 { 00:22:51.486 "name": "BaseBdev3", 00:22:51.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.486 "is_configured": false, 00:22:51.486 "data_offset": 0, 00:22:51.486 "data_size": 0 00:22:51.486 } 00:22:51.486 ] 00:22:51.486 }' 00:22:51.486 10:48:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:51.486 10:48:18 -- common/autotest_common.sh@10 -- # set +x 00:22:52.419 10:48:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:52.419 [2024-07-24 10:48:19.062362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:52.419 [2024-07-24 10:48:19.062652] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:22:52.419 [2024-07-24 10:48:19.062800] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:52.419 [2024-07-24 10:48:19.063050] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002050 00:22:52.419 [2024-07-24 10:48:19.064015] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:22:52.419 [2024-07-24 10:48:19.064154] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:22:52.419 [2024-07-24 10:48:19.064567] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.419 BaseBdev3 00:22:52.419 10:48:19 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:52.419 10:48:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:22:52.419 10:48:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:52.419 10:48:19 -- common/autotest_common.sh@889 -- # local i 00:22:52.419 10:48:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:52.419 10:48:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:52.419 10:48:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:52.676 10:48:19 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:52.934 [ 00:22:52.934 { 00:22:52.934 "name": "BaseBdev3", 00:22:52.934 "aliases": [ 00:22:52.934 "7aa66aa8-2603-4e78-b288-8cfafec3abcc" 00:22:52.934 ], 00:22:52.934 "product_name": "Malloc disk", 00:22:52.934 "block_size": 512, 00:22:52.934 "num_blocks": 65536, 00:22:52.934 "uuid": "7aa66aa8-2603-4e78-b288-8cfafec3abcc", 00:22:52.934 "assigned_rate_limits": { 00:22:52.934 "rw_ios_per_sec": 0, 00:22:52.934 "rw_mbytes_per_sec": 0, 00:22:52.934 "r_mbytes_per_sec": 0, 00:22:52.934 "w_mbytes_per_sec": 0 00:22:52.934 }, 00:22:52.934 "claimed": true, 00:22:52.934 "claim_type": "exclusive_write", 00:22:52.934 "zoned": false, 00:22:52.934 "supported_io_types": { 00:22:52.934 "read": true, 00:22:52.934 "write": true, 00:22:52.934 "unmap": true, 00:22:52.934 "write_zeroes": true, 00:22:52.934 "flush": true, 00:22:52.934 "reset": true, 00:22:52.934 "compare": false, 00:22:52.934 "compare_and_write": false, 00:22:52.934 "abort": true, 00:22:52.934 "nvme_admin": false, 00:22:52.934 "nvme_io": false 00:22:52.934 }, 00:22:52.934 "memory_domains": [ 00:22:52.934 { 00:22:52.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.934 "dma_device_type": 2 00:22:52.934 } 00:22:52.934 ], 00:22:52.934 "driver_specific": {} 00:22:52.934 } 00:22:52.934 ] 00:22:52.934 10:48:19 -- common/autotest_common.sh@895 -- # return 0 00:22:52.934 10:48:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:52.934 10:48:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.191 10:48:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.192 10:48:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.449 10:48:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.449 "name": "Existed_Raid", 00:22:53.449 "uuid": "e46ec20e-511f-437d-975d-294f0308d924", 00:22:53.449 "strip_size_kb": 64, 00:22:53.449 "state": "online", 00:22:53.449 "raid_level": "raid5f", 00:22:53.449 
"superblock": false, 00:22:53.449 "num_base_bdevs": 3, 00:22:53.449 "num_base_bdevs_discovered": 3, 00:22:53.449 "num_base_bdevs_operational": 3, 00:22:53.449 "base_bdevs_list": [ 00:22:53.449 { 00:22:53.449 "name": "BaseBdev1", 00:22:53.449 "uuid": "0681b087-4da1-46c4-aaa7-52b88ef498bf", 00:22:53.449 "is_configured": true, 00:22:53.449 "data_offset": 0, 00:22:53.449 "data_size": 65536 00:22:53.449 }, 00:22:53.449 { 00:22:53.449 "name": "BaseBdev2", 00:22:53.449 "uuid": "2b551ba2-514b-4459-86d1-1c76d14d75ca", 00:22:53.449 "is_configured": true, 00:22:53.449 "data_offset": 0, 00:22:53.449 "data_size": 65536 00:22:53.449 }, 00:22:53.449 { 00:22:53.449 "name": "BaseBdev3", 00:22:53.449 "uuid": "7aa66aa8-2603-4e78-b288-8cfafec3abcc", 00:22:53.449 "is_configured": true, 00:22:53.449 "data_offset": 0, 00:22:53.449 "data_size": 65536 00:22:53.449 } 00:22:53.449 ] 00:22:53.449 }' 00:22:53.449 10:48:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.449 10:48:19 -- common/autotest_common.sh@10 -- # set +x 00:22:54.014 10:48:20 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:54.272 [2024-07-24 10:48:20.780441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.272 10:48:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:54.530 10:48:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:54.530 "name": "Existed_Raid", 00:22:54.530 "uuid": "e46ec20e-511f-437d-975d-294f0308d924", 00:22:54.530 "strip_size_kb": 64, 00:22:54.530 "state": "online", 00:22:54.530 "raid_level": "raid5f", 00:22:54.530 "superblock": false, 00:22:54.530 "num_base_bdevs": 3, 00:22:54.530 "num_base_bdevs_discovered": 2, 00:22:54.530 "num_base_bdevs_operational": 2, 00:22:54.530 "base_bdevs_list": [ 00:22:54.530 { 00:22:54.530 "name": null, 00:22:54.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:54.530 "is_configured": false, 00:22:54.530 "data_offset": 0, 00:22:54.530 "data_size": 65536 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "name": "BaseBdev2", 00:22:54.530 "uuid": "2b551ba2-514b-4459-86d1-1c76d14d75ca", 00:22:54.530 "is_configured": true, 00:22:54.530 "data_offset": 0, 00:22:54.530 "data_size": 
65536 00:22:54.530 }, 00:22:54.530 { 00:22:54.530 "name": "BaseBdev3", 00:22:54.530 "uuid": "7aa66aa8-2603-4e78-b288-8cfafec3abcc", 00:22:54.530 "is_configured": true, 00:22:54.530 "data_offset": 0, 00:22:54.530 "data_size": 65536 00:22:54.530 } 00:22:54.530 ] 00:22:54.530 }' 00:22:54.530 10:48:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:54.530 10:48:21 -- common/autotest_common.sh@10 -- # set +x 00:22:55.094 10:48:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:55.094 10:48:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:55.094 10:48:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.094 10:48:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:55.352 10:48:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:55.352 10:48:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:55.352 10:48:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:55.610 [2024-07-24 10:48:22.196590] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:55.610 [2024-07-24 10:48:22.197000] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.610 [2024-07-24 10:48:22.197227] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.610 10:48:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:55.610 10:48:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:55.610 10:48:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:55.610 10:48:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.868 10:48:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:55.868 10:48:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:55.868 10:48:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:56.126 [2024-07-24 10:48:22.743818] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:56.126 [2024-07-24 10:48:22.746246] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:22:56.126 10:48:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:56.126 10:48:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:56.126 10:48:22 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.126 10:48:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:56.383 10:48:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:56.383 10:48:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:56.383 10:48:23 -- bdev/bdev_raid.sh@287 -- # killprocess 137968 00:22:56.383 10:48:23 -- common/autotest_common.sh@926 -- # '[' -z 137968 ']' 00:22:56.383 10:48:23 -- common/autotest_common.sh@930 -- # kill -0 137968 00:22:56.383 10:48:23 -- common/autotest_common.sh@931 -- # uname 00:22:56.383 10:48:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:56.383 10:48:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 137968 00:22:56.383 10:48:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:56.383 10:48:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:56.383 10:48:23 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 137968' 00:22:56.383 killing process with pid 137968 00:22:56.383 10:48:23 -- common/autotest_common.sh@945 -- # kill 137968 00:22:56.383 10:48:23 -- common/autotest_common.sh@950 -- # wait 137968 00:22:56.383 [2024-07-24 10:48:23.054892] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:56.383 [2024-07-24 10:48:23.055048] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:56.949 00:22:56.949 real 0m12.135s 00:22:56.949 user 0m22.131s 00:22:56.949 sys 0m1.560s 00:22:56.949 10:48:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.949 10:48:23 -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 ************************************ 00:22:56.949 END TEST raid5f_state_function_test 00:22:56.949 ************************************ 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:56.949 10:48:23 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:22:56.949 10:48:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:56.949 10:48:23 -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 ************************************ 00:22:56.949 START TEST raid5f_state_function_test_sb 00:22:56.949 ************************************ 00:22:56.949 10:48:23 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=138347 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@227 
-- # echo 'Process raid pid: 138347' 00:22:56.949 Process raid pid: 138347 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:56.949 10:48:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 138347 /var/tmp/spdk-raid.sock 00:22:56.949 10:48:23 -- common/autotest_common.sh@819 -- # '[' -z 138347 ']' 00:22:56.949 10:48:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:56.949 10:48:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:56.949 10:48:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:56.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:56.949 10:48:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:56.949 10:48:23 -- common/autotest_common.sh@10 -- # set +x 00:22:56.949 [2024-07-24 10:48:23.511399] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:22:56.949 [2024-07-24 10:48:23.511881] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.207 [2024-07-24 10:48:23.658298] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.207 [2024-07-24 10:48:23.793363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.207 [2024-07-24 10:48:23.872562] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:57.772 10:48:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:57.772 10:48:24 -- common/autotest_common.sh@852 -- # return 0 00:22:57.772 10:48:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:58.030 [2024-07-24 10:48:24.659246] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:58.030 [2024-07-24 10:48:24.659671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:58.030 [2024-07-24 10:48:24.659835] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:58.030 [2024-07-24 10:48:24.659908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:58.030 [2024-07-24 10:48:24.660132] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:58.030 [2024-07-24 10:48:24.660243] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@124 
-- # local num_base_bdevs_discovered 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.030 10:48:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.288 10:48:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:58.288 "name": "Existed_Raid", 00:22:58.288 "uuid": "aa21ba3f-d94b-4ef4-a17a-6a1d5ae2500b", 00:22:58.288 "strip_size_kb": 64, 00:22:58.288 "state": "configuring", 00:22:58.288 "raid_level": "raid5f", 00:22:58.288 "superblock": true, 00:22:58.288 "num_base_bdevs": 3, 00:22:58.288 "num_base_bdevs_discovered": 0, 00:22:58.288 "num_base_bdevs_operational": 3, 00:22:58.288 "base_bdevs_list": [ 00:22:58.288 { 00:22:58.288 "name": "BaseBdev1", 00:22:58.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.288 "is_configured": false, 00:22:58.288 "data_offset": 0, 00:22:58.288 "data_size": 0 00:22:58.288 }, 00:22:58.288 { 00:22:58.288 "name": "BaseBdev2", 00:22:58.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.288 "is_configured": false, 00:22:58.288 "data_offset": 0, 00:22:58.288 "data_size": 0 00:22:58.288 }, 00:22:58.288 { 00:22:58.288 "name": "BaseBdev3", 00:22:58.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.288 "is_configured": false, 00:22:58.288 "data_offset": 0, 00:22:58.288 "data_size": 0 00:22:58.288 } 00:22:58.288 ] 00:22:58.288 }' 00:22:58.288 10:48:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:58.288 10:48:24 -- common/autotest_common.sh@10 -- # set +x 00:22:59.221 10:48:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:59.479 [2024-07-24 10:48:25.907319] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:59.479 [2024-07-24 10:48:25.907794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:22:59.479 10:48:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:59.737 [2024-07-24 10:48:26.207515] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:59.737 [2024-07-24 10:48:26.207927] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:59.737 [2024-07-24 10:48:26.208062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:59.737 [2024-07-24 10:48:26.208136] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:59.737 [2024-07-24 10:48:26.208319] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:59.737 [2024-07-24 10:48:26.208513] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:59.737 10:48:26 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:59.995 [2024-07-24 10:48:26.455502] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.995 BaseBdev1 00:22:59.995 10:48:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:59.995 10:48:26 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:22:59.995 
10:48:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:22:59.995 10:48:26 -- common/autotest_common.sh@889 -- # local i 00:22:59.995 10:48:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:22:59.995 10:48:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:22:59.995 10:48:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.252 10:48:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:00.511 [ 00:23:00.511 { 00:23:00.511 "name": "BaseBdev1", 00:23:00.511 "aliases": [ 00:23:00.511 "0edee334-b874-458e-be1a-d2706e83a385" 00:23:00.511 ], 00:23:00.511 "product_name": "Malloc disk", 00:23:00.511 "block_size": 512, 00:23:00.511 "num_blocks": 65536, 00:23:00.511 "uuid": "0edee334-b874-458e-be1a-d2706e83a385", 00:23:00.511 "assigned_rate_limits": { 00:23:00.511 "rw_ios_per_sec": 0, 00:23:00.511 "rw_mbytes_per_sec": 0, 00:23:00.511 "r_mbytes_per_sec": 0, 00:23:00.511 "w_mbytes_per_sec": 0 00:23:00.511 }, 00:23:00.511 "claimed": true, 00:23:00.511 "claim_type": "exclusive_write", 00:23:00.511 "zoned": false, 00:23:00.511 "supported_io_types": { 00:23:00.511 "read": true, 00:23:00.511 "write": true, 00:23:00.511 "unmap": true, 00:23:00.511 "write_zeroes": true, 00:23:00.511 "flush": true, 00:23:00.511 "reset": true, 00:23:00.511 "compare": false, 00:23:00.511 "compare_and_write": false, 00:23:00.511 "abort": true, 00:23:00.511 "nvme_admin": false, 00:23:00.511 "nvme_io": false 00:23:00.511 }, 00:23:00.511 "memory_domains": [ 00:23:00.511 { 00:23:00.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.511 "dma_device_type": 2 00:23:00.511 } 00:23:00.511 ], 00:23:00.511 "driver_specific": {} 00:23:00.511 } 00:23:00.511 ] 00:23:00.511 10:48:26 -- common/autotest_common.sh@895 -- # return 0 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.511 10:48:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.511 10:48:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.511 10:48:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.511 10:48:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.769 10:48:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.769 "name": "Existed_Raid", 00:23:00.769 "uuid": "c8056de7-636f-4def-9f09-2547d85fb556", 00:23:00.769 "strip_size_kb": 64, 00:23:00.769 "state": "configuring", 00:23:00.769 "raid_level": "raid5f", 00:23:00.769 "superblock": true, 00:23:00.769 "num_base_bdevs": 3, 00:23:00.769 "num_base_bdevs_discovered": 1, 00:23:00.769 "num_base_bdevs_operational": 3, 00:23:00.769 "base_bdevs_list": [ 00:23:00.769 { 00:23:00.769 "name": "BaseBdev1", 00:23:00.769 "uuid": 
"0edee334-b874-458e-be1a-d2706e83a385", 00:23:00.769 "is_configured": true, 00:23:00.769 "data_offset": 2048, 00:23:00.769 "data_size": 63488 00:23:00.769 }, 00:23:00.769 { 00:23:00.769 "name": "BaseBdev2", 00:23:00.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.769 "is_configured": false, 00:23:00.769 "data_offset": 0, 00:23:00.769 "data_size": 0 00:23:00.769 }, 00:23:00.769 { 00:23:00.769 "name": "BaseBdev3", 00:23:00.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.769 "is_configured": false, 00:23:00.769 "data_offset": 0, 00:23:00.769 "data_size": 0 00:23:00.769 } 00:23:00.769 ] 00:23:00.769 }' 00:23:00.769 10:48:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.769 10:48:27 -- common/autotest_common.sh@10 -- # set +x 00:23:01.335 10:48:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:01.594 [2024-07-24 10:48:28.188016] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:01.594 [2024-07-24 10:48:28.188382] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:23:01.594 10:48:28 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:01.594 10:48:28 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:01.852 10:48:28 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:02.110 BaseBdev1 00:23:02.110 10:48:28 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:02.110 10:48:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:02.110 10:48:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:02.110 10:48:28 -- common/autotest_common.sh@889 -- # local i 00:23:02.110 10:48:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:02.110 10:48:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:02.110 10:48:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.677 10:48:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:02.677 [ 00:23:02.677 { 00:23:02.677 "name": "BaseBdev1", 00:23:02.677 "aliases": [ 00:23:02.677 "f0c50201-e3b1-4384-a143-aa8dfe6d705b" 00:23:02.677 ], 00:23:02.677 "product_name": "Malloc disk", 00:23:02.677 "block_size": 512, 00:23:02.677 "num_blocks": 65536, 00:23:02.677 "uuid": "f0c50201-e3b1-4384-a143-aa8dfe6d705b", 00:23:02.677 "assigned_rate_limits": { 00:23:02.677 "rw_ios_per_sec": 0, 00:23:02.677 "rw_mbytes_per_sec": 0, 00:23:02.677 "r_mbytes_per_sec": 0, 00:23:02.677 "w_mbytes_per_sec": 0 00:23:02.677 }, 00:23:02.677 "claimed": false, 00:23:02.677 "zoned": false, 00:23:02.678 "supported_io_types": { 00:23:02.678 "read": true, 00:23:02.678 "write": true, 00:23:02.678 "unmap": true, 00:23:02.678 "write_zeroes": true, 00:23:02.678 "flush": true, 00:23:02.678 "reset": true, 00:23:02.678 "compare": false, 00:23:02.678 "compare_and_write": false, 00:23:02.678 "abort": true, 00:23:02.678 "nvme_admin": false, 00:23:02.678 "nvme_io": false 00:23:02.678 }, 00:23:02.678 "memory_domains": [ 00:23:02.678 { 00:23:02.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.678 "dma_device_type": 2 00:23:02.678 } 00:23:02.678 ], 00:23:02.678 
"driver_specific": {} 00:23:02.678 } 00:23:02.678 ] 00:23:02.678 10:48:29 -- common/autotest_common.sh@895 -- # return 0 00:23:02.678 10:48:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:02.936 [2024-07-24 10:48:29.567697] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.936 [2024-07-24 10:48:29.570540] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:02.936 [2024-07-24 10:48:29.570748] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:02.936 [2024-07-24 10:48:29.570879] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:02.936 [2024-07-24 10:48:29.570953] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.936 10:48:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.194 10:48:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.194 "name": "Existed_Raid", 00:23:03.194 "uuid": "f7525cd8-110c-4231-994f-efb01b220d12", 00:23:03.194 "strip_size_kb": 64, 00:23:03.194 "state": "configuring", 00:23:03.194 "raid_level": "raid5f", 00:23:03.194 "superblock": true, 00:23:03.194 "num_base_bdevs": 3, 00:23:03.194 "num_base_bdevs_discovered": 1, 00:23:03.194 "num_base_bdevs_operational": 3, 00:23:03.194 "base_bdevs_list": [ 00:23:03.194 { 00:23:03.194 "name": "BaseBdev1", 00:23:03.194 "uuid": "f0c50201-e3b1-4384-a143-aa8dfe6d705b", 00:23:03.194 "is_configured": true, 00:23:03.194 "data_offset": 2048, 00:23:03.194 "data_size": 63488 00:23:03.194 }, 00:23:03.194 { 00:23:03.194 "name": "BaseBdev2", 00:23:03.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.194 "is_configured": false, 00:23:03.194 "data_offset": 0, 00:23:03.194 "data_size": 0 00:23:03.194 }, 00:23:03.194 { 00:23:03.194 "name": "BaseBdev3", 00:23:03.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.194 "is_configured": false, 00:23:03.194 "data_offset": 0, 00:23:03.194 "data_size": 0 00:23:03.195 } 00:23:03.195 ] 00:23:03.195 }' 00:23:03.195 10:48:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.195 10:48:29 -- common/autotest_common.sh@10 -- # set +x 00:23:04.131 10:48:30 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:04.131 [2024-07-24 10:48:30.761312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:04.131 BaseBdev2 00:23:04.131 10:48:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:04.132 10:48:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:04.132 10:48:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:04.132 10:48:30 -- common/autotest_common.sh@889 -- # local i 00:23:04.132 10:48:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:04.132 10:48:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:04.132 10:48:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:04.390 10:48:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:04.648 [ 00:23:04.648 { 00:23:04.648 "name": "BaseBdev2", 00:23:04.648 "aliases": [ 00:23:04.648 "7568e14c-aa7b-4895-9687-57760d4e36a9" 00:23:04.648 ], 00:23:04.648 "product_name": "Malloc disk", 00:23:04.648 "block_size": 512, 00:23:04.648 "num_blocks": 65536, 00:23:04.648 "uuid": "7568e14c-aa7b-4895-9687-57760d4e36a9", 00:23:04.648 "assigned_rate_limits": { 00:23:04.648 "rw_ios_per_sec": 0, 00:23:04.648 "rw_mbytes_per_sec": 0, 00:23:04.648 "r_mbytes_per_sec": 0, 00:23:04.648 "w_mbytes_per_sec": 0 00:23:04.648 }, 00:23:04.648 "claimed": true, 00:23:04.648 "claim_type": "exclusive_write", 00:23:04.648 "zoned": false, 00:23:04.648 "supported_io_types": { 00:23:04.648 "read": true, 00:23:04.648 "write": true, 00:23:04.648 "unmap": true, 00:23:04.648 "write_zeroes": true, 00:23:04.648 "flush": true, 00:23:04.648 "reset": true, 00:23:04.648 "compare": false, 00:23:04.648 "compare_and_write": false, 00:23:04.648 "abort": true, 00:23:04.648 "nvme_admin": false, 00:23:04.648 "nvme_io": false 00:23:04.648 }, 00:23:04.648 "memory_domains": [ 00:23:04.648 { 00:23:04.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.648 "dma_device_type": 2 00:23:04.648 } 00:23:04.648 ], 00:23:04.648 "driver_specific": {} 00:23:04.648 } 00:23:04.648 ] 00:23:04.648 10:48:31 -- common/autotest_common.sh@895 -- # return 0 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.648 10:48:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:04.906 10:48:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.906 "name": "Existed_Raid", 00:23:04.906 "uuid": "f7525cd8-110c-4231-994f-efb01b220d12", 00:23:04.906 "strip_size_kb": 64, 00:23:04.906 "state": "configuring", 00:23:04.906 "raid_level": "raid5f", 00:23:04.906 "superblock": true, 00:23:04.906 "num_base_bdevs": 3, 00:23:04.906 "num_base_bdevs_discovered": 2, 00:23:04.906 "num_base_bdevs_operational": 3, 00:23:04.906 "base_bdevs_list": [ 00:23:04.906 { 00:23:04.906 "name": "BaseBdev1", 00:23:04.906 "uuid": "f0c50201-e3b1-4384-a143-aa8dfe6d705b", 00:23:04.906 "is_configured": true, 00:23:04.906 "data_offset": 2048, 00:23:04.906 "data_size": 63488 00:23:04.906 }, 00:23:04.906 { 00:23:04.906 "name": "BaseBdev2", 00:23:04.906 "uuid": "7568e14c-aa7b-4895-9687-57760d4e36a9", 00:23:04.906 "is_configured": true, 00:23:04.906 "data_offset": 2048, 00:23:04.906 "data_size": 63488 00:23:04.906 }, 00:23:04.906 { 00:23:04.906 "name": "BaseBdev3", 00:23:04.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.906 "is_configured": false, 00:23:04.906 "data_offset": 0, 00:23:04.906 "data_size": 0 00:23:04.906 } 00:23:04.906 ] 00:23:04.906 }' 00:23:04.906 10:48:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.906 10:48:31 -- common/autotest_common.sh@10 -- # set +x 00:23:05.841 10:48:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:06.115 [2024-07-24 10:48:32.591551] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:06.115 [2024-07-24 10:48:32.592285] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:23:06.115 [2024-07-24 10:48:32.592475] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:06.115 [2024-07-24 10:48:32.592771] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:23:06.115 BaseBdev3 00:23:06.115 [2024-07-24 10:48:32.593822] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:23:06.115 [2024-07-24 10:48:32.593841] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:23:06.115 [2024-07-24 10:48:32.594020] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.115 10:48:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:06.115 10:48:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:06.115 10:48:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:06.115 10:48:32 -- common/autotest_common.sh@889 -- # local i 00:23:06.115 10:48:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:06.115 10:48:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:06.115 10:48:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:06.373 10:48:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:06.631 [ 00:23:06.631 { 00:23:06.631 "name": "BaseBdev3", 00:23:06.631 "aliases": [ 00:23:06.631 "772a9ad4-2aaa-431b-9c71-eae76b69b6bd" 00:23:06.631 ], 00:23:06.631 "product_name": "Malloc disk", 00:23:06.631 "block_size": 512, 00:23:06.631 "num_blocks": 65536, 00:23:06.631 "uuid": "772a9ad4-2aaa-431b-9c71-eae76b69b6bd", 00:23:06.631 "assigned_rate_limits": { 
00:23:06.631 "rw_ios_per_sec": 0, 00:23:06.631 "rw_mbytes_per_sec": 0, 00:23:06.631 "r_mbytes_per_sec": 0, 00:23:06.631 "w_mbytes_per_sec": 0 00:23:06.631 }, 00:23:06.631 "claimed": true, 00:23:06.631 "claim_type": "exclusive_write", 00:23:06.631 "zoned": false, 00:23:06.631 "supported_io_types": { 00:23:06.631 "read": true, 00:23:06.631 "write": true, 00:23:06.631 "unmap": true, 00:23:06.631 "write_zeroes": true, 00:23:06.631 "flush": true, 00:23:06.631 "reset": true, 00:23:06.631 "compare": false, 00:23:06.631 "compare_and_write": false, 00:23:06.631 "abort": true, 00:23:06.631 "nvme_admin": false, 00:23:06.631 "nvme_io": false 00:23:06.631 }, 00:23:06.631 "memory_domains": [ 00:23:06.631 { 00:23:06.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:06.631 "dma_device_type": 2 00:23:06.631 } 00:23:06.631 ], 00:23:06.631 "driver_specific": {} 00:23:06.631 } 00:23:06.631 ] 00:23:06.631 10:48:33 -- common/autotest_common.sh@895 -- # return 0 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.631 10:48:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.888 10:48:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:06.888 "name": "Existed_Raid", 00:23:06.888 "uuid": "f7525cd8-110c-4231-994f-efb01b220d12", 00:23:06.888 "strip_size_kb": 64, 00:23:06.888 "state": "online", 00:23:06.888 "raid_level": "raid5f", 00:23:06.888 "superblock": true, 00:23:06.888 "num_base_bdevs": 3, 00:23:06.888 "num_base_bdevs_discovered": 3, 00:23:06.888 "num_base_bdevs_operational": 3, 00:23:06.888 "base_bdevs_list": [ 00:23:06.888 { 00:23:06.888 "name": "BaseBdev1", 00:23:06.888 "uuid": "f0c50201-e3b1-4384-a143-aa8dfe6d705b", 00:23:06.888 "is_configured": true, 00:23:06.888 "data_offset": 2048, 00:23:06.888 "data_size": 63488 00:23:06.888 }, 00:23:06.888 { 00:23:06.888 "name": "BaseBdev2", 00:23:06.888 "uuid": "7568e14c-aa7b-4895-9687-57760d4e36a9", 00:23:06.888 "is_configured": true, 00:23:06.888 "data_offset": 2048, 00:23:06.888 "data_size": 63488 00:23:06.888 }, 00:23:06.888 { 00:23:06.888 "name": "BaseBdev3", 00:23:06.888 "uuid": "772a9ad4-2aaa-431b-9c71-eae76b69b6bd", 00:23:06.888 "is_configured": true, 00:23:06.888 "data_offset": 2048, 00:23:06.888 "data_size": 63488 00:23:06.888 } 00:23:06.888 ] 00:23:06.888 }' 00:23:06.888 10:48:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:06.888 10:48:33 -- common/autotest_common.sh@10 -- # set +x 00:23:07.453 10:48:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:07.710 [2024-07-24 10:48:34.353025] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.710 10:48:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.968 10:48:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.968 "name": "Existed_Raid", 00:23:07.968 "uuid": "f7525cd8-110c-4231-994f-efb01b220d12", 00:23:07.968 "strip_size_kb": 64, 00:23:07.968 "state": "online", 00:23:07.968 "raid_level": "raid5f", 00:23:07.968 "superblock": true, 00:23:07.968 "num_base_bdevs": 3, 00:23:07.968 "num_base_bdevs_discovered": 2, 00:23:07.968 "num_base_bdevs_operational": 2, 00:23:07.968 "base_bdevs_list": [ 00:23:07.968 { 00:23:07.968 "name": null, 00:23:07.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.968 "is_configured": false, 00:23:07.968 "data_offset": 2048, 00:23:07.968 "data_size": 63488 00:23:07.968 }, 00:23:07.968 { 00:23:07.968 "name": "BaseBdev2", 00:23:07.968 "uuid": "7568e14c-aa7b-4895-9687-57760d4e36a9", 00:23:07.968 "is_configured": true, 00:23:07.968 "data_offset": 2048, 00:23:07.968 "data_size": 63488 00:23:07.968 }, 00:23:07.968 { 00:23:07.968 "name": "BaseBdev3", 00:23:07.968 "uuid": "772a9ad4-2aaa-431b-9c71-eae76b69b6bd", 00:23:07.968 "is_configured": true, 00:23:07.968 "data_offset": 2048, 00:23:07.968 "data_size": 63488 00:23:07.968 } 00:23:07.968 ] 00:23:07.968 }' 00:23:07.968 10:48:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.968 10:48:34 -- common/autotest_common.sh@10 -- # set +x 00:23:08.901 10:48:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:08.901 10:48:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:08.901 10:48:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.901 10:48:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:09.159 10:48:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:09.159 10:48:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:09.159 10:48:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:09.417 
[2024-07-24 10:48:35.914235] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:09.417 [2024-07-24 10:48:35.914612] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:09.417 [2024-07-24 10:48:35.914892] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:09.417 10:48:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:09.417 10:48:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:09.417 10:48:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.417 10:48:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:09.675 10:48:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:09.675 10:48:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:09.675 10:48:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:09.933 [2024-07-24 10:48:36.428048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:09.933 [2024-07-24 10:48:36.428502] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:23:09.933 10:48:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:09.933 10:48:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:09.933 10:48:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.933 10:48:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:10.191 10:48:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:10.191 10:48:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:10.191 10:48:36 -- bdev/bdev_raid.sh@287 -- # killprocess 138347 00:23:10.191 10:48:36 -- common/autotest_common.sh@926 -- # '[' -z 138347 ']' 00:23:10.191 10:48:36 -- common/autotest_common.sh@930 -- # kill -0 138347 00:23:10.191 10:48:36 -- common/autotest_common.sh@931 -- # uname 00:23:10.191 10:48:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:10.191 10:48:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138347 00:23:10.191 10:48:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:10.191 10:48:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:10.191 10:48:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138347' 00:23:10.191 killing process with pid 138347 00:23:10.191 10:48:36 -- common/autotest_common.sh@945 -- # kill 138347 00:23:10.191 10:48:36 -- common/autotest_common.sh@950 -- # wait 138347 00:23:10.191 [2024-07-24 10:48:36.777419] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:10.191 [2024-07-24 10:48:36.777597] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:10.450 00:23:10.450 real 0m13.590s 00:23:10.450 user 0m24.749s 00:23:10.450 sys 0m1.977s 00:23:10.450 10:48:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:10.450 10:48:37 -- common/autotest_common.sh@10 -- # set +x 00:23:10.450 ************************************ 00:23:10.450 END TEST raid5f_state_function_test_sb 00:23:10.450 ************************************ 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:10.450 10:48:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 
00:23:10.450 10:48:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:10.450 10:48:37 -- common/autotest_common.sh@10 -- # set +x 00:23:10.450 ************************************ 00:23:10.450 START TEST raid5f_superblock_test 00:23:10.450 ************************************ 00:23:10.450 10:48:37 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@357 -- # raid_pid=138740 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@358 -- # waitforlisten 138740 /var/tmp/spdk-raid.sock 00:23:10.450 10:48:37 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:10.450 10:48:37 -- common/autotest_common.sh@819 -- # '[' -z 138740 ']' 00:23:10.450 10:48:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:10.450 10:48:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:10.450 10:48:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:10.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:10.450 10:48:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:10.450 10:48:37 -- common/autotest_common.sh@10 -- # set +x 00:23:10.708 [2024-07-24 10:48:37.164794] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:23:10.708 [2024-07-24 10:48:37.165307] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138740 ] 00:23:10.708 [2024-07-24 10:48:37.312747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.966 [2024-07-24 10:48:37.440820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:10.966 [2024-07-24 10:48:37.518901] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:11.546 10:48:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:11.546 10:48:38 -- common/autotest_common.sh@852 -- # return 0 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:11.546 10:48:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:11.803 malloc1 00:23:11.803 10:48:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:12.061 [2024-07-24 10:48:38.622857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:12.061 [2024-07-24 10:48:38.623230] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.061 [2024-07-24 10:48:38.623410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:12.061 [2024-07-24 10:48:38.623626] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.061 [2024-07-24 10:48:38.626758] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.061 [2024-07-24 10:48:38.626937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:12.061 pt1 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:12.061 10:48:38 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:12.320 malloc2 00:23:12.320 10:48:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:23:12.578 [2024-07-24 10:48:39.127412] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:12.578 [2024-07-24 10:48:39.127974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.578 [2024-07-24 10:48:39.128088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:12.578 [2024-07-24 10:48:39.128454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.578 [2024-07-24 10:48:39.131479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.578 [2024-07-24 10:48:39.131716] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:12.578 pt2 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:12.578 10:48:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:12.837 malloc3 00:23:12.837 10:48:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:13.095 [2024-07-24 10:48:39.663394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:13.095 [2024-07-24 10:48:39.663837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.095 [2024-07-24 10:48:39.664047] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:13.095 [2024-07-24 10:48:39.664211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.095 [2024-07-24 10:48:39.667225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.095 [2024-07-24 10:48:39.667413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:13.095 pt3 00:23:13.095 10:48:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:13.095 10:48:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:13.095 10:48:39 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:13.354 [2024-07-24 10:48:39.908079] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:13.354 [2024-07-24 10:48:39.910789] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:13.354 [2024-07-24 10:48:39.910995] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:13.354 [2024-07-24 10:48:39.911329] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:13.354 [2024-07-24 10:48:39.911465] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:13.354 [2024-07-24 10:48:39.911810] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:23:13.354 [2024-07-24 10:48:39.912883] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:13.354 [2024-07-24 10:48:39.913020] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:23:13.354 [2024-07-24 10:48:39.913358] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.354 10:48:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.612 10:48:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.612 "name": "raid_bdev1", 00:23:13.612 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:13.612 "strip_size_kb": 64, 00:23:13.612 "state": "online", 00:23:13.612 "raid_level": "raid5f", 00:23:13.612 "superblock": true, 00:23:13.612 "num_base_bdevs": 3, 00:23:13.612 "num_base_bdevs_discovered": 3, 00:23:13.612 "num_base_bdevs_operational": 3, 00:23:13.612 "base_bdevs_list": [ 00:23:13.612 { 00:23:13.612 "name": "pt1", 00:23:13.612 "uuid": "01967d10-3b76-593c-af48-aa97d2d405d0", 00:23:13.612 "is_configured": true, 00:23:13.612 "data_offset": 2048, 00:23:13.612 "data_size": 63488 00:23:13.612 }, 00:23:13.612 { 00:23:13.612 "name": "pt2", 00:23:13.612 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:13.612 "is_configured": true, 00:23:13.612 "data_offset": 2048, 00:23:13.612 "data_size": 63488 00:23:13.612 }, 00:23:13.612 { 00:23:13.612 "name": "pt3", 00:23:13.612 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:13.612 "is_configured": true, 00:23:13.612 "data_offset": 2048, 00:23:13.612 "data_size": 63488 00:23:13.612 } 00:23:13.612 ] 00:23:13.612 }' 00:23:13.612 10:48:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.612 10:48:40 -- common/autotest_common.sh@10 -- # set +x 00:23:14.547 10:48:40 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:14.547 10:48:40 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:14.547 [2024-07-24 10:48:41.117985] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:14.547 10:48:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=674dcd39-5518-4e08-8094-62ee67cada8e 00:23:14.547 10:48:41 -- bdev/bdev_raid.sh@380 -- # '[' -z 674dcd39-5518-4e08-8094-62ee67cada8e ']' 00:23:14.547 10:48:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:14.805 [2024-07-24 10:48:41.385824] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:14.805 [2024-07-24 10:48:41.386177] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.805 [2024-07-24 10:48:41.386429] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.805 [2024-07-24 10:48:41.386670] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.805 [2024-07-24 10:48:41.386811] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:23:14.805 10:48:41 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.805 10:48:41 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:15.063 10:48:41 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:15.063 10:48:41 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:15.063 10:48:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:15.063 10:48:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:15.322 10:48:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:15.322 10:48:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:15.580 10:48:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:15.580 10:48:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:15.837 10:48:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:15.837 10:48:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:16.096 10:48:42 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:16.096 10:48:42 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:16.096 10:48:42 -- common/autotest_common.sh@640 -- # local es=0 00:23:16.096 10:48:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:16.096 10:48:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:16.096 10:48:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:16.096 10:48:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:16.096 10:48:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:16.096 10:48:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:16.096 10:48:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:16.096 10:48:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:16.096 10:48:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:16.096 10:48:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:16.355 [2024-07-24 10:48:42.954160] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:16.355 [2024-07-24 10:48:42.956802] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:16.355 [2024-07-24 10:48:42.957002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:16.355 [2024-07-24 10:48:42.957116] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:16.355 [2024-07-24 10:48:42.957435] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:16.355 [2024-07-24 10:48:42.957618] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:16.355 [2024-07-24 10:48:42.957814] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:16.355 [2024-07-24 10:48:42.957942] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state configuring 00:23:16.355 request: 00:23:16.355 { 00:23:16.355 "name": "raid_bdev1", 00:23:16.355 "raid_level": "raid5f", 00:23:16.355 "base_bdevs": [ 00:23:16.355 "malloc1", 00:23:16.355 "malloc2", 00:23:16.355 "malloc3" 00:23:16.355 ], 00:23:16.355 "superblock": false, 00:23:16.355 "strip_size_kb": 64, 00:23:16.355 "method": "bdev_raid_create", 00:23:16.355 "req_id": 1 00:23:16.355 } 00:23:16.355 Got JSON-RPC error response 00:23:16.355 response: 00:23:16.355 { 00:23:16.355 "code": -17, 00:23:16.355 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:16.355 } 00:23:16.355 10:48:42 -- common/autotest_common.sh@643 -- # es=1 00:23:16.355 10:48:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:16.355 10:48:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:16.355 10:48:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:16.355 10:48:42 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.355 10:48:42 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:16.613 10:48:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:16.613 10:48:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:16.613 10:48:43 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:16.909 [2024-07-24 10:48:43.490596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:16.909 [2024-07-24 10:48:43.491160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.909 [2024-07-24 10:48:43.491440] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:16.909 [2024-07-24 10:48:43.491763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.909 [2024-07-24 10:48:43.495610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.909 [2024-07-24 10:48:43.495825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:16.909 [2024-07-24 10:48:43.496170] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:16.909 [2024-07-24 10:48:43.496450] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:16.909 pt1 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.909 10:48:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.167 10:48:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:17.167 "name": "raid_bdev1", 00:23:17.167 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:17.167 "strip_size_kb": 64, 00:23:17.167 "state": "configuring", 00:23:17.167 "raid_level": "raid5f", 00:23:17.167 "superblock": true, 00:23:17.167 "num_base_bdevs": 3, 00:23:17.167 "num_base_bdevs_discovered": 1, 00:23:17.167 "num_base_bdevs_operational": 3, 00:23:17.167 "base_bdevs_list": [ 00:23:17.167 { 00:23:17.167 "name": "pt1", 00:23:17.167 "uuid": "01967d10-3b76-593c-af48-aa97d2d405d0", 00:23:17.167 "is_configured": true, 00:23:17.167 "data_offset": 2048, 00:23:17.167 "data_size": 63488 00:23:17.167 }, 00:23:17.167 { 00:23:17.167 "name": null, 00:23:17.167 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:17.167 "is_configured": false, 00:23:17.167 "data_offset": 2048, 00:23:17.167 "data_size": 63488 00:23:17.167 }, 00:23:17.167 { 00:23:17.167 "name": null, 00:23:17.167 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:17.167 "is_configured": false, 00:23:17.167 "data_offset": 2048, 00:23:17.167 "data_size": 63488 00:23:17.167 } 00:23:17.167 ] 00:23:17.167 }' 00:23:17.167 10:48:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:17.167 10:48:43 -- common/autotest_common.sh@10 -- # set +x 00:23:17.733 10:48:44 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:17.733 10:48:44 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:17.991 [2024-07-24 10:48:44.644632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:17.991 [2024-07-24 10:48:44.645026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.991 [2024-07-24 10:48:44.645221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:17.991 [2024-07-24 10:48:44.645393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.991 [2024-07-24 10:48:44.646008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.991 [2024-07-24 10:48:44.646186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:17.991 [2024-07-24 10:48:44.646427] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:17.991 [2024-07-24 10:48:44.646574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:17.991 pt2 00:23:17.991 10:48:44 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:18.249 [2024-07-24 10:48:44.924705] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.507 10:48:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.765 10:48:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.765 "name": "raid_bdev1", 00:23:18.765 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:18.765 "strip_size_kb": 64, 00:23:18.765 "state": "configuring", 00:23:18.765 "raid_level": "raid5f", 00:23:18.765 "superblock": true, 00:23:18.765 "num_base_bdevs": 3, 00:23:18.765 "num_base_bdevs_discovered": 1, 00:23:18.765 "num_base_bdevs_operational": 3, 00:23:18.765 "base_bdevs_list": [ 00:23:18.765 { 00:23:18.765 "name": "pt1", 00:23:18.765 "uuid": "01967d10-3b76-593c-af48-aa97d2d405d0", 00:23:18.765 "is_configured": true, 00:23:18.765 "data_offset": 2048, 00:23:18.765 "data_size": 63488 00:23:18.765 }, 00:23:18.765 { 00:23:18.765 "name": null, 00:23:18.765 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:18.765 "is_configured": false, 00:23:18.765 "data_offset": 2048, 00:23:18.765 "data_size": 63488 00:23:18.765 }, 00:23:18.765 { 00:23:18.765 "name": null, 00:23:18.765 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:18.765 "is_configured": false, 00:23:18.765 "data_offset": 2048, 00:23:18.765 "data_size": 63488 00:23:18.765 } 00:23:18.765 ] 00:23:18.765 }' 00:23:18.765 10:48:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.765 10:48:45 -- common/autotest_common.sh@10 -- # set +x 00:23:19.331 10:48:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:19.331 10:48:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:19.331 10:48:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:19.589 [2024-07-24 10:48:46.132903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:19.589 [2024-07-24 10:48:46.133241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.589 [2024-07-24 10:48:46.133410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:19.589 [2024-07-24 10:48:46.133555] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.589 [2024-07-24 10:48:46.134099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.589 [2024-07-24 10:48:46.134264] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:19.589 [2024-07-24 10:48:46.134509] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:19.590 [2024-07-24 10:48:46.134656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:19.590 pt2 00:23:19.590 10:48:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:19.590 10:48:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:19.590 10:48:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:19.848 [2024-07-24 10:48:46.413006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:19.848 [2024-07-24 10:48:46.413354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.848 [2024-07-24 10:48:46.413528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:19.848 [2024-07-24 10:48:46.413669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.848 [2024-07-24 10:48:46.414219] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.848 [2024-07-24 10:48:46.414392] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:19.848 [2024-07-24 10:48:46.414628] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:19.848 [2024-07-24 10:48:46.414818] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:19.848 [2024-07-24 10:48:46.415127] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:19.848 [2024-07-24 10:48:46.415257] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:19.848 [2024-07-24 10:48:46.415380] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:23:19.848 [2024-07-24 10:48:46.416124] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:19.848 [2024-07-24 10:48:46.416260] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:19.848 [2024-07-24 10:48:46.416513] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.848 pt3 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.848 10:48:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.848 
10:48:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.106 10:48:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.106 "name": "raid_bdev1", 00:23:20.106 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:20.106 "strip_size_kb": 64, 00:23:20.106 "state": "online", 00:23:20.106 "raid_level": "raid5f", 00:23:20.106 "superblock": true, 00:23:20.106 "num_base_bdevs": 3, 00:23:20.106 "num_base_bdevs_discovered": 3, 00:23:20.106 "num_base_bdevs_operational": 3, 00:23:20.106 "base_bdevs_list": [ 00:23:20.106 { 00:23:20.106 "name": "pt1", 00:23:20.106 "uuid": "01967d10-3b76-593c-af48-aa97d2d405d0", 00:23:20.106 "is_configured": true, 00:23:20.106 "data_offset": 2048, 00:23:20.106 "data_size": 63488 00:23:20.106 }, 00:23:20.107 { 00:23:20.107 "name": "pt2", 00:23:20.107 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:20.107 "is_configured": true, 00:23:20.107 "data_offset": 2048, 00:23:20.107 "data_size": 63488 00:23:20.107 }, 00:23:20.107 { 00:23:20.107 "name": "pt3", 00:23:20.107 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:20.107 "is_configured": true, 00:23:20.107 "data_offset": 2048, 00:23:20.107 "data_size": 63488 00:23:20.107 } 00:23:20.107 ] 00:23:20.107 }' 00:23:20.107 10:48:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.107 10:48:46 -- common/autotest_common.sh@10 -- # set +x 00:23:21.052 10:48:47 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:21.052 10:48:47 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:21.052 [2024-07-24 10:48:47.606775] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:21.052 10:48:47 -- bdev/bdev_raid.sh@430 -- # '[' 674dcd39-5518-4e08-8094-62ee67cada8e '!=' 674dcd39-5518-4e08-8094-62ee67cada8e ']' 00:23:21.052 10:48:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:21.052 10:48:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:21.052 10:48:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:21.052 10:48:47 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:21.327 [2024-07-24 10:48:47.846541] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.327 10:48:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.585 10:48:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.585 "name": "raid_bdev1", 00:23:21.585 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:21.585 "strip_size_kb": 64, 
00:23:21.585 "state": "online", 00:23:21.585 "raid_level": "raid5f", 00:23:21.585 "superblock": true, 00:23:21.585 "num_base_bdevs": 3, 00:23:21.585 "num_base_bdevs_discovered": 2, 00:23:21.585 "num_base_bdevs_operational": 2, 00:23:21.585 "base_bdevs_list": [ 00:23:21.585 { 00:23:21.585 "name": null, 00:23:21.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.585 "is_configured": false, 00:23:21.585 "data_offset": 2048, 00:23:21.585 "data_size": 63488 00:23:21.585 }, 00:23:21.585 { 00:23:21.585 "name": "pt2", 00:23:21.585 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:21.585 "is_configured": true, 00:23:21.585 "data_offset": 2048, 00:23:21.585 "data_size": 63488 00:23:21.585 }, 00:23:21.585 { 00:23:21.585 "name": "pt3", 00:23:21.585 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:21.585 "is_configured": true, 00:23:21.585 "data_offset": 2048, 00:23:21.585 "data_size": 63488 00:23:21.585 } 00:23:21.585 ] 00:23:21.585 }' 00:23:21.585 10:48:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.585 10:48:48 -- common/autotest_common.sh@10 -- # set +x 00:23:22.151 10:48:48 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.409 [2024-07-24 10:48:48.946829] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.409 [2024-07-24 10:48:48.947168] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.409 [2024-07-24 10:48:48.947419] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.409 [2024-07-24 10:48:48.947705] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.409 [2024-07-24 10:48:48.947837] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:22.409 10:48:48 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.409 10:48:48 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:22.667 10:48:49 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:22.667 10:48:49 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:22.667 10:48:49 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:22.667 10:48:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:22.667 10:48:49 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:22.925 10:48:49 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:22.925 10:48:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:22.925 10:48:49 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:23.184 10:48:49 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:23.184 10:48:49 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:23.184 10:48:49 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:23.184 10:48:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:23.184 10:48:49 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:23.442 [2024-07-24 10:48:49.946974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:23.442 [2024-07-24 10:48:49.947397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:23:23.442 [2024-07-24 10:48:49.947603] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:23.442 [2024-07-24 10:48:49.947744] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.442 [2024-07-24 10:48:49.950683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.442 [2024-07-24 10:48:49.950879] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:23.442 [2024-07-24 10:48:49.951194] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:23.442 [2024-07-24 10:48:49.951359] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:23.442 pt2 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.442 10:48:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.700 10:48:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.700 "name": "raid_bdev1", 00:23:23.700 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:23.700 "strip_size_kb": 64, 00:23:23.700 "state": "configuring", 00:23:23.700 "raid_level": "raid5f", 00:23:23.700 "superblock": true, 00:23:23.700 "num_base_bdevs": 3, 00:23:23.700 "num_base_bdevs_discovered": 1, 00:23:23.700 "num_base_bdevs_operational": 2, 00:23:23.700 "base_bdevs_list": [ 00:23:23.700 { 00:23:23.700 "name": null, 00:23:23.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.700 "is_configured": false, 00:23:23.700 "data_offset": 2048, 00:23:23.700 "data_size": 63488 00:23:23.700 }, 00:23:23.700 { 00:23:23.700 "name": "pt2", 00:23:23.700 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:23.700 "is_configured": true, 00:23:23.700 "data_offset": 2048, 00:23:23.700 "data_size": 63488 00:23:23.700 }, 00:23:23.700 { 00:23:23.700 "name": null, 00:23:23.700 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:23.700 "is_configured": false, 00:23:23.700 "data_offset": 2048, 00:23:23.700 "data_size": 63488 00:23:23.700 } 00:23:23.700 ] 00:23:23.700 }' 00:23:23.700 10:48:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.700 10:48:50 -- common/autotest_common.sh@10 -- # set +x 00:23:24.265 10:48:50 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:24.266 10:48:50 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:24.266 10:48:50 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:24.266 10:48:50 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:24.526 [2024-07-24 10:48:51.095694] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:24.526 [2024-07-24 10:48:51.096113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.526 [2024-07-24 10:48:51.096214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:24.526 [2024-07-24 10:48:51.096508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.526 [2024-07-24 10:48:51.097246] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.526 [2024-07-24 10:48:51.097418] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:24.526 [2024-07-24 10:48:51.097684] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:24.526 [2024-07-24 10:48:51.097832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:24.526 [2024-07-24 10:48:51.098127] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:24.526 [2024-07-24 10:48:51.098256] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:24.526 [2024-07-24 10:48:51.098494] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:23:24.526 [2024-07-24 10:48:51.099489] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:24.526 [2024-07-24 10:48:51.099662] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:24.526 [2024-07-24 10:48:51.100135] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:24.526 pt3 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.526 10:48:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:24.791 10:48:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:24.791 "name": "raid_bdev1", 00:23:24.791 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:24.791 "strip_size_kb": 64, 00:23:24.791 "state": "online", 00:23:24.791 "raid_level": "raid5f", 00:23:24.791 "superblock": true, 00:23:24.791 "num_base_bdevs": 3, 00:23:24.791 "num_base_bdevs_discovered": 2, 00:23:24.791 "num_base_bdevs_operational": 2, 00:23:24.791 "base_bdevs_list": [ 00:23:24.791 { 00:23:24.791 "name": null, 00:23:24.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.791 "is_configured": false, 00:23:24.791 "data_offset": 2048, 00:23:24.791 "data_size": 63488 00:23:24.791 }, 00:23:24.791 { 00:23:24.791 "name": "pt2", 00:23:24.791 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 
00:23:24.791 "is_configured": true, 00:23:24.791 "data_offset": 2048, 00:23:24.791 "data_size": 63488 00:23:24.791 }, 00:23:24.791 { 00:23:24.791 "name": "pt3", 00:23:24.791 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:24.791 "is_configured": true, 00:23:24.791 "data_offset": 2048, 00:23:24.791 "data_size": 63488 00:23:24.791 } 00:23:24.791 ] 00:23:24.791 }' 00:23:24.791 10:48:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:24.791 10:48:51 -- common/autotest_common.sh@10 -- # set +x 00:23:25.723 10:48:52 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:25.723 10:48:52 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:25.723 [2024-07-24 10:48:52.260279] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:25.723 [2024-07-24 10:48:52.260607] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:25.723 [2024-07-24 10:48:52.260883] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:25.724 [2024-07-24 10:48:52.261079] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:25.724 [2024-07-24 10:48:52.261193] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:25.724 10:48:52 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.724 10:48:52 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:25.982 10:48:52 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:25.982 10:48:52 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:25.982 10:48:52 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:26.241 [2024-07-24 10:48:52.784379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:26.241 [2024-07-24 10:48:52.784839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.241 [2024-07-24 10:48:52.785032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:26.241 [2024-07-24 10:48:52.785178] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.241 [2024-07-24 10:48:52.788131] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.241 [2024-07-24 10:48:52.788325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:26.241 [2024-07-24 10:48:52.788579] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:26.241 [2024-07-24 10:48:52.788747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:26.241 pt1 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.241 10:48:52 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.241 10:48:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.500 10:48:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.500 "name": "raid_bdev1", 00:23:26.500 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:26.500 "strip_size_kb": 64, 00:23:26.500 "state": "configuring", 00:23:26.500 "raid_level": "raid5f", 00:23:26.500 "superblock": true, 00:23:26.500 "num_base_bdevs": 3, 00:23:26.500 "num_base_bdevs_discovered": 1, 00:23:26.500 "num_base_bdevs_operational": 3, 00:23:26.500 "base_bdevs_list": [ 00:23:26.500 { 00:23:26.500 "name": "pt1", 00:23:26.500 "uuid": "01967d10-3b76-593c-af48-aa97d2d405d0", 00:23:26.500 "is_configured": true, 00:23:26.500 "data_offset": 2048, 00:23:26.500 "data_size": 63488 00:23:26.500 }, 00:23:26.500 { 00:23:26.500 "name": null, 00:23:26.500 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:26.500 "is_configured": false, 00:23:26.500 "data_offset": 2048, 00:23:26.500 "data_size": 63488 00:23:26.500 }, 00:23:26.500 { 00:23:26.500 "name": null, 00:23:26.500 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:26.500 "is_configured": false, 00:23:26.500 "data_offset": 2048, 00:23:26.500 "data_size": 63488 00:23:26.500 } 00:23:26.500 ] 00:23:26.500 }' 00:23:26.500 10:48:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.500 10:48:53 -- common/autotest_common.sh@10 -- # set +x 00:23:27.433 10:48:53 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:27.433 10:48:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:27.433 10:48:53 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:27.433 10:48:53 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:27.433 10:48:53 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:27.433 10:48:53 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:27.691 10:48:54 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:27.691 10:48:54 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:27.691 10:48:54 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:27.691 10:48:54 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:27.949 [2024-07-24 10:48:54.477112] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:27.949 [2024-07-24 10:48:54.477541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.949 [2024-07-24 10:48:54.477631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:27.949 [2024-07-24 10:48:54.477955] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.949 [2024-07-24 10:48:54.478657] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.949 [2024-07-24 10:48:54.478837] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:27.949 [2024-07-24 10:48:54.479095] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:23:27.949 [2024-07-24 10:48:54.479215] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:27.949 [2024-07-24 10:48:54.479318] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:27.949 [2024-07-24 10:48:54.479455] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:23:27.949 [2024-07-24 10:48:54.479700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:27.949 pt3 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.949 10:48:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.208 10:48:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.208 "name": "raid_bdev1", 00:23:28.208 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:28.208 "strip_size_kb": 64, 00:23:28.208 "state": "configuring", 00:23:28.208 "raid_level": "raid5f", 00:23:28.208 "superblock": true, 00:23:28.208 "num_base_bdevs": 3, 00:23:28.208 "num_base_bdevs_discovered": 1, 00:23:28.208 "num_base_bdevs_operational": 2, 00:23:28.208 "base_bdevs_list": [ 00:23:28.208 { 00:23:28.208 "name": null, 00:23:28.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.208 "is_configured": false, 00:23:28.208 "data_offset": 2048, 00:23:28.208 "data_size": 63488 00:23:28.208 }, 00:23:28.208 { 00:23:28.208 "name": null, 00:23:28.208 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:28.208 "is_configured": false, 00:23:28.208 "data_offset": 2048, 00:23:28.208 "data_size": 63488 00:23:28.208 }, 00:23:28.208 { 00:23:28.208 "name": "pt3", 00:23:28.208 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:28.208 "is_configured": true, 00:23:28.208 "data_offset": 2048, 00:23:28.208 "data_size": 63488 00:23:28.208 } 00:23:28.208 ] 00:23:28.208 }' 00:23:28.208 10:48:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.208 10:48:54 -- common/autotest_common.sh@10 -- # set +x 00:23:29.143 10:48:55 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:29.143 10:48:55 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:29.143 10:48:55 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:29.143 [2024-07-24 10:48:55.813420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:29.143 [2024-07-24 10:48:55.813835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.143 [2024-07-24 
10:48:55.814008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:29.143 [2024-07-24 10:48:55.814150] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.143 [2024-07-24 10:48:55.814774] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.143 [2024-07-24 10:48:55.814949] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:29.143 [2024-07-24 10:48:55.815175] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:29.143 [2024-07-24 10:48:55.815309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:29.143 [2024-07-24 10:48:55.815607] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:29.143 [2024-07-24 10:48:55.815734] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:29.143 [2024-07-24 10:48:55.815872] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:23:29.143 [2024-07-24 10:48:55.816705] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:29.143 [2024-07-24 10:48:55.816842] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:29.143 [2024-07-24 10:48:55.817140] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.143 pt2 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.401 10:48:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.659 10:48:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.659 "name": "raid_bdev1", 00:23:29.659 "uuid": "674dcd39-5518-4e08-8094-62ee67cada8e", 00:23:29.659 "strip_size_kb": 64, 00:23:29.659 "state": "online", 00:23:29.659 "raid_level": "raid5f", 00:23:29.659 "superblock": true, 00:23:29.659 "num_base_bdevs": 3, 00:23:29.659 "num_base_bdevs_discovered": 2, 00:23:29.659 "num_base_bdevs_operational": 2, 00:23:29.659 "base_bdevs_list": [ 00:23:29.659 { 00:23:29.659 "name": null, 00:23:29.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.659 "is_configured": false, 00:23:29.659 "data_offset": 2048, 00:23:29.659 "data_size": 63488 00:23:29.659 }, 00:23:29.659 { 00:23:29.659 "name": "pt2", 00:23:29.659 "uuid": "a1746681-aa8d-53e4-ad47-fb69cac3b3d5", 00:23:29.659 "is_configured": true, 00:23:29.659 "data_offset": 2048, 
00:23:29.659 "data_size": 63488 00:23:29.659 }, 00:23:29.659 { 00:23:29.659 "name": "pt3", 00:23:29.659 "uuid": "2cc9b2cd-f607-5712-a3c4-66bc7ab4d153", 00:23:29.659 "is_configured": true, 00:23:29.659 "data_offset": 2048, 00:23:29.659 "data_size": 63488 00:23:29.659 } 00:23:29.659 ] 00:23:29.659 }' 00:23:29.659 10:48:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.659 10:48:56 -- common/autotest_common.sh@10 -- # set +x 00:23:30.225 10:48:56 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:30.225 10:48:56 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:30.483 [2024-07-24 10:48:57.121989] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:30.483 10:48:57 -- bdev/bdev_raid.sh@506 -- # '[' 674dcd39-5518-4e08-8094-62ee67cada8e '!=' 674dcd39-5518-4e08-8094-62ee67cada8e ']' 00:23:30.483 10:48:57 -- bdev/bdev_raid.sh@511 -- # killprocess 138740 00:23:30.483 10:48:57 -- common/autotest_common.sh@926 -- # '[' -z 138740 ']' 00:23:30.483 10:48:57 -- common/autotest_common.sh@930 -- # kill -0 138740 00:23:30.483 10:48:57 -- common/autotest_common.sh@931 -- # uname 00:23:30.483 10:48:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:30.483 10:48:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 138740 00:23:30.483 10:48:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:30.483 10:48:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:30.483 10:48:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 138740' 00:23:30.483 killing process with pid 138740 00:23:30.483 10:48:57 -- common/autotest_common.sh@945 -- # kill 138740 00:23:30.483 [2024-07-24 10:48:57.167527] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:30.483 10:48:57 -- common/autotest_common.sh@950 -- # wait 138740 00:23:30.483 [2024-07-24 10:48:57.167899] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.483 [2024-07-24 10:48:57.168108] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.483 [2024-07-24 10:48:57.168233] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:30.741 [2024-07-24 10:48:57.231041] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:30.999 00:23:30.999 real 0m20.469s 00:23:30.999 user 0m38.104s 00:23:30.999 sys 0m2.733s 00:23:30.999 10:48:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.999 10:48:57 -- common/autotest_common.sh@10 -- # set +x 00:23:30.999 ************************************ 00:23:30.999 END TEST raid5f_superblock_test 00:23:30.999 ************************************ 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:30.999 10:48:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:30.999 10:48:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:30.999 10:48:57 -- common/autotest_common.sh@10 -- # set +x 00:23:30.999 ************************************ 00:23:30.999 START TEST raid5f_rebuild_test 00:23:30.999 ************************************ 00:23:30.999 10:48:57 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 
false false 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@544 -- # raid_pid=139358 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139358 /var/tmp/spdk-raid.sock 00:23:30.999 10:48:57 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:30.999 10:48:57 -- common/autotest_common.sh@819 -- # '[' -z 139358 ']' 00:23:30.999 10:48:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:30.999 10:48:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:30.999 10:48:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:30.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:30.999 10:48:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:30.999 10:48:57 -- common/autotest_common.sh@10 -- # set +x 00:23:31.258 [2024-07-24 10:48:57.705613] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:23:31.258 [2024-07-24 10:48:57.706062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139358 ] 00:23:31.258 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:31.258 Zero copy mechanism will not be used. 
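The trace below assembles the array for the rebuild test one RPC at a time over the bdevperf socket. Condensed into a sketch, with an rpc() shorthand and the inline comments added here for readability (socket path, sizes and bdev names are taken verbatim from the commands that follow), the sequence is roughly:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  rpc bdev_malloc_create 32 512 -b BaseBdev1      # three 32 MiB base bdevs, 512-byte blocks
  rpc bdev_malloc_create 32 512 -b BaseBdev2
  rpc bdev_malloc_create 32 512 -b BaseBdev3
  rpc bdev_malloc_create 32 512 -b spare_malloc   # the spare sits behind a delay bdev, presumably
  rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000   # so the later rebuild stays slow enough to observe
  rpc bdev_passthru_create -b spare_delay -p spare
  rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1   # 64 KiB strips, no superblock in this variant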
00:23:31.258 [2024-07-24 10:48:57.849282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.516 [2024-07-24 10:48:57.976483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.516 [2024-07-24 10:48:58.052827] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:32.082 10:48:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:32.082 10:48:58 -- common/autotest_common.sh@852 -- # return 0 00:23:32.082 10:48:58 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:32.082 10:48:58 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:32.082 10:48:58 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:32.340 BaseBdev1 00:23:32.598 10:48:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:32.598 10:48:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:32.598 10:48:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:32.872 BaseBdev2 00:23:32.872 10:48:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:32.872 10:48:59 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:32.872 10:48:59 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:33.130 BaseBdev3 00:23:33.130 10:48:59 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:33.388 spare_malloc 00:23:33.388 10:48:59 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:33.646 spare_delay 00:23:33.646 10:49:00 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:33.904 [2024-07-24 10:49:00.387393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:33.904 [2024-07-24 10:49:00.387850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.904 [2024-07-24 10:49:00.388051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:33.904 [2024-07-24 10:49:00.388250] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.904 [2024-07-24 10:49:00.391259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.904 [2024-07-24 10:49:00.391456] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:33.904 spare 00:23:33.904 10:49:00 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:34.161 [2024-07-24 10:49:00.620120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:34.161 [2024-07-24 10:49:00.622937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:34.161 [2024-07-24 10:49:00.623198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:34.161 [2024-07-24 10:49:00.623521] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:34.162 
[2024-07-24 10:49:00.623642] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:34.162 [2024-07-24 10:49:00.623897] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:23:34.162 [2024-07-24 10:49:00.624926] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:34.162 [2024-07-24 10:49:00.625056] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007880 00:23:34.162 [2024-07-24 10:49:00.625482] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.162 10:49:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.420 10:49:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.420 "name": "raid_bdev1", 00:23:34.420 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:34.420 "strip_size_kb": 64, 00:23:34.420 "state": "online", 00:23:34.420 "raid_level": "raid5f", 00:23:34.420 "superblock": false, 00:23:34.420 "num_base_bdevs": 3, 00:23:34.420 "num_base_bdevs_discovered": 3, 00:23:34.420 "num_base_bdevs_operational": 3, 00:23:34.420 "base_bdevs_list": [ 00:23:34.420 { 00:23:34.420 "name": "BaseBdev1", 00:23:34.420 "uuid": "3a92de39-e930-4a4a-87ce-20077956bb8b", 00:23:34.420 "is_configured": true, 00:23:34.420 "data_offset": 0, 00:23:34.420 "data_size": 65536 00:23:34.420 }, 00:23:34.420 { 00:23:34.420 "name": "BaseBdev2", 00:23:34.420 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:34.420 "is_configured": true, 00:23:34.420 "data_offset": 0, 00:23:34.420 "data_size": 65536 00:23:34.420 }, 00:23:34.420 { 00:23:34.420 "name": "BaseBdev3", 00:23:34.420 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:34.420 "is_configured": true, 00:23:34.420 "data_offset": 0, 00:23:34.420 "data_size": 65536 00:23:34.420 } 00:23:34.420 ] 00:23:34.420 }' 00:23:34.420 10:49:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.420 10:49:00 -- common/autotest_common.sh@10 -- # set +x 00:23:34.986 10:49:01 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:34.986 10:49:01 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:35.244 [2024-07-24 10:49:01.917938] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.502 10:49:01 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:35.502 10:49:01 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:35.502 10:49:01 -- bdev/bdev_raid.sh@570 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.760 10:49:02 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:35.760 10:49:02 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:35.760 10:49:02 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:35.760 10:49:02 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@12 -- # local i 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.760 10:49:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:36.024 [2024-07-24 10:49:02.525967] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:23:36.024 /dev/nbd0 00:23:36.024 10:49:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:36.024 10:49:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:36.024 10:49:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:36.024 10:49:02 -- common/autotest_common.sh@857 -- # local i 00:23:36.024 10:49:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:36.024 10:49:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:36.024 10:49:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:36.024 10:49:02 -- common/autotest_common.sh@861 -- # break 00:23:36.024 10:49:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:36.024 10:49:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:36.024 10:49:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:36.024 1+0 records in 00:23:36.024 1+0 records out 00:23:36.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584914 s, 7.0 MB/s 00:23:36.024 10:49:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.024 10:49:02 -- common/autotest_common.sh@874 -- # size=4096 00:23:36.024 10:49:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.024 10:49:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:36.024 10:49:02 -- common/autotest_common.sh@877 -- # return 0 00:23:36.024 10:49:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:36.024 10:49:02 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:36.024 10:49:02 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:36.024 10:49:02 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:36.024 10:49:02 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:36.024 10:49:02 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:36.602 512+0 records in 00:23:36.602 512+0 records out 00:23:36.602 67108864 bytes (67 MB, 64 MiB) copied, 0.400932 s, 167 MB/s 00:23:36.602 10:49:02 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:36.602 10:49:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:23:36.602 10:49:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:36.602 10:49:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:36.602 10:49:03 -- bdev/nbd_common.sh@51 -- # local i 00:23:36.602 10:49:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:36.602 10:49:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:36.858 [2024-07-24 10:49:03.306042] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@41 -- # break 00:23:36.858 10:49:03 -- bdev/nbd_common.sh@45 -- # return 0 00:23:36.858 10:49:03 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:37.116 [2024-07-24 10:49:03.545591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.116 "name": "raid_bdev1", 00:23:37.116 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:37.116 "strip_size_kb": 64, 00:23:37.116 "state": "online", 00:23:37.116 "raid_level": "raid5f", 00:23:37.116 "superblock": false, 00:23:37.116 "num_base_bdevs": 3, 00:23:37.116 "num_base_bdevs_discovered": 2, 00:23:37.116 "num_base_bdevs_operational": 2, 00:23:37.116 "base_bdevs_list": [ 00:23:37.116 { 00:23:37.116 "name": null, 00:23:37.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.116 "is_configured": false, 00:23:37.116 "data_offset": 0, 00:23:37.116 "data_size": 65536 00:23:37.116 }, 00:23:37.116 { 00:23:37.116 "name": "BaseBdev2", 00:23:37.116 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:37.116 "is_configured": true, 00:23:37.116 "data_offset": 0, 00:23:37.116 "data_size": 65536 00:23:37.116 }, 00:23:37.116 { 00:23:37.116 "name": "BaseBdev3", 00:23:37.116 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:37.116 "is_configured": true, 00:23:37.116 "data_offset": 0, 00:23:37.116 "data_size": 65536 00:23:37.116 } 00:23:37.116 ] 00:23:37.116 }' 
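The dd just above pushes 512 aligned 128 KiB writes (one full raid5f stripe each: 2 data strips of 64 KiB) through /dev/nbd0, filling the 64 MiB array, before BaseBdev1 is detached with bdev_raid_remove_base_bdev. The script then re-checks the degraded array with verify_raid_bdev_state raid_bdev1 online raid5f 64 2. A minimal sketch of the kind of assertions that helper makes on the JSON captured above; the field names come from that output, while the rpc.py path shortening and the here-string plumbing are illustrative rather than the script's literal code:

  tmp=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state'         <<<"$tmp") == online ]]    # expected_state
  [[ $(jq -r '.raid_level'    <<<"$tmp") == raid5f ]]
  [[ $(jq -r '.strip_size_kb' <<<"$tmp") == 64 ]]
  [[ $(jq -r '.num_base_bdevs_discovered'  <<<"$tmp") == 2 ]]   # BaseBdev1 was just removed
  [[ $(jq -r '.num_base_bdevs_operational' <<<"$tmp") == 2 ]]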
00:23:37.116 10:49:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.116 10:49:03 -- common/autotest_common.sh@10 -- # set +x 00:23:38.051 10:49:04 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:38.309 [2024-07-24 10:49:04.745829] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:38.309 [2024-07-24 10:49:04.745926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:38.309 [2024-07-24 10:49:04.751019] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027990 00:23:38.309 [2024-07-24 10:49:04.753747] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:38.309 10:49:04 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:39.243 10:49:05 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.243 10:49:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:39.243 10:49:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:39.243 10:49:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:39.243 10:49:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:39.243 10:49:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.243 10:49:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.501 10:49:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.501 "name": "raid_bdev1", 00:23:39.501 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:39.501 "strip_size_kb": 64, 00:23:39.501 "state": "online", 00:23:39.501 "raid_level": "raid5f", 00:23:39.501 "superblock": false, 00:23:39.501 "num_base_bdevs": 3, 00:23:39.501 "num_base_bdevs_discovered": 3, 00:23:39.501 "num_base_bdevs_operational": 3, 00:23:39.502 "process": { 00:23:39.502 "type": "rebuild", 00:23:39.502 "target": "spare", 00:23:39.502 "progress": { 00:23:39.502 "blocks": 24576, 00:23:39.502 "percent": 18 00:23:39.502 } 00:23:39.502 }, 00:23:39.502 "base_bdevs_list": [ 00:23:39.502 { 00:23:39.502 "name": "spare", 00:23:39.502 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:39.502 "is_configured": true, 00:23:39.502 "data_offset": 0, 00:23:39.502 "data_size": 65536 00:23:39.502 }, 00:23:39.502 { 00:23:39.502 "name": "BaseBdev2", 00:23:39.502 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:39.502 "is_configured": true, 00:23:39.502 "data_offset": 0, 00:23:39.502 "data_size": 65536 00:23:39.502 }, 00:23:39.502 { 00:23:39.502 "name": "BaseBdev3", 00:23:39.502 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:39.502 "is_configured": true, 00:23:39.502 "data_offset": 0, 00:23:39.502 "data_size": 65536 00:23:39.502 } 00:23:39.502 ] 00:23:39.502 }' 00:23:39.502 10:49:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.502 10:49:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.502 10:49:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.502 10:49:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.502 10:49:06 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:39.760 [2024-07-24 10:49:06.371982] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:40.018 [2024-07-24 10:49:06.472614] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:40.018 [2024-07-24 10:49:06.472754] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.018 10:49:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.276 10:49:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.276 "name": "raid_bdev1", 00:23:40.276 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:40.276 "strip_size_kb": 64, 00:23:40.276 "state": "online", 00:23:40.276 "raid_level": "raid5f", 00:23:40.276 "superblock": false, 00:23:40.276 "num_base_bdevs": 3, 00:23:40.276 "num_base_bdevs_discovered": 2, 00:23:40.276 "num_base_bdevs_operational": 2, 00:23:40.276 "base_bdevs_list": [ 00:23:40.276 { 00:23:40.276 "name": null, 00:23:40.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.276 "is_configured": false, 00:23:40.276 "data_offset": 0, 00:23:40.276 "data_size": 65536 00:23:40.276 }, 00:23:40.276 { 00:23:40.276 "name": "BaseBdev2", 00:23:40.276 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:40.276 "is_configured": true, 00:23:40.276 "data_offset": 0, 00:23:40.276 "data_size": 65536 00:23:40.276 }, 00:23:40.276 { 00:23:40.276 "name": "BaseBdev3", 00:23:40.276 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:40.276 "is_configured": true, 00:23:40.276 "data_offset": 0, 00:23:40.276 "data_size": 65536 00:23:40.276 } 00:23:40.276 ] 00:23:40.276 }' 00:23:40.276 10:49:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.276 10:49:06 -- common/autotest_common.sh@10 -- # set +x 00:23:40.841 10:49:07 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.841 10:49:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:40.841 10:49:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:40.841 10:49:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:40.841 10:49:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:40.841 10:49:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.841 10:49:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:41.099 10:49:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:41.099 "name": "raid_bdev1", 00:23:41.099 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:41.099 "strip_size_kb": 64, 00:23:41.099 "state": "online", 00:23:41.099 "raid_level": "raid5f", 00:23:41.099 "superblock": false, 00:23:41.099 "num_base_bdevs": 3, 00:23:41.099 
"num_base_bdevs_discovered": 2, 00:23:41.099 "num_base_bdevs_operational": 2, 00:23:41.099 "base_bdevs_list": [ 00:23:41.099 { 00:23:41.100 "name": null, 00:23:41.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.100 "is_configured": false, 00:23:41.100 "data_offset": 0, 00:23:41.100 "data_size": 65536 00:23:41.100 }, 00:23:41.100 { 00:23:41.100 "name": "BaseBdev2", 00:23:41.100 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:41.100 "is_configured": true, 00:23:41.100 "data_offset": 0, 00:23:41.100 "data_size": 65536 00:23:41.100 }, 00:23:41.100 { 00:23:41.100 "name": "BaseBdev3", 00:23:41.100 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:41.100 "is_configured": true, 00:23:41.100 "data_offset": 0, 00:23:41.100 "data_size": 65536 00:23:41.100 } 00:23:41.100 ] 00:23:41.100 }' 00:23:41.100 10:49:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:41.100 10:49:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:41.100 10:49:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:41.358 10:49:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:41.358 10:49:07 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:41.616 [2024-07-24 10:49:08.092313] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:41.616 [2024-07-24 10:49:08.092440] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.616 [2024-07-24 10:49:08.100400] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027b30 00:23:41.616 [2024-07-24 10:49:08.104186] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:41.616 10:49:08 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:42.549 10:49:09 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.549 10:49:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:42.549 10:49:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:42.549 10:49:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:42.549 10:49:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:42.549 10:49:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.549 10:49:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.815 10:49:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:42.815 "name": "raid_bdev1", 00:23:42.815 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:42.815 "strip_size_kb": 64, 00:23:42.815 "state": "online", 00:23:42.815 "raid_level": "raid5f", 00:23:42.815 "superblock": false, 00:23:42.815 "num_base_bdevs": 3, 00:23:42.815 "num_base_bdevs_discovered": 3, 00:23:42.815 "num_base_bdevs_operational": 3, 00:23:42.815 "process": { 00:23:42.815 "type": "rebuild", 00:23:42.815 "target": "spare", 00:23:42.815 "progress": { 00:23:42.815 "blocks": 24576, 00:23:42.815 "percent": 18 00:23:42.815 } 00:23:42.815 }, 00:23:42.815 "base_bdevs_list": [ 00:23:42.815 { 00:23:42.815 "name": "spare", 00:23:42.815 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:42.815 "is_configured": true, 00:23:42.815 "data_offset": 0, 00:23:42.815 "data_size": 65536 00:23:42.815 }, 00:23:42.815 { 00:23:42.815 "name": "BaseBdev2", 00:23:42.815 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:42.815 "is_configured": true, 
00:23:42.815 "data_offset": 0, 00:23:42.815 "data_size": 65536 00:23:42.815 }, 00:23:42.815 { 00:23:42.815 "name": "BaseBdev3", 00:23:42.815 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:42.815 "is_configured": true, 00:23:42.815 "data_offset": 0, 00:23:42.815 "data_size": 65536 00:23:42.815 } 00:23:42.815 ] 00:23:42.815 }' 00:23:42.815 10:49:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:42.815 10:49:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.815 10:49:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@657 -- # local timeout=629 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.081 10:49:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.339 10:49:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.339 "name": "raid_bdev1", 00:23:43.339 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:43.339 "strip_size_kb": 64, 00:23:43.339 "state": "online", 00:23:43.339 "raid_level": "raid5f", 00:23:43.339 "superblock": false, 00:23:43.339 "num_base_bdevs": 3, 00:23:43.339 "num_base_bdevs_discovered": 3, 00:23:43.339 "num_base_bdevs_operational": 3, 00:23:43.339 "process": { 00:23:43.339 "type": "rebuild", 00:23:43.339 "target": "spare", 00:23:43.339 "progress": { 00:23:43.339 "blocks": 34816, 00:23:43.339 "percent": 26 00:23:43.339 } 00:23:43.339 }, 00:23:43.339 "base_bdevs_list": [ 00:23:43.339 { 00:23:43.339 "name": "spare", 00:23:43.339 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:43.339 "is_configured": true, 00:23:43.339 "data_offset": 0, 00:23:43.339 "data_size": 65536 00:23:43.339 }, 00:23:43.339 { 00:23:43.339 "name": "BaseBdev2", 00:23:43.339 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:43.339 "is_configured": true, 00:23:43.339 "data_offset": 0, 00:23:43.339 "data_size": 65536 00:23:43.339 }, 00:23:43.339 { 00:23:43.339 "name": "BaseBdev3", 00:23:43.339 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:43.339 "is_configured": true, 00:23:43.339 "data_offset": 0, 00:23:43.339 "data_size": 65536 00:23:43.339 } 00:23:43.339 ] 00:23:43.339 }' 00:23:43.339 10:49:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:43.339 10:49:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:43.339 10:49:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.339 10:49:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:43.339 10:49:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:44.714 10:49:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:44.714 
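The repeated verify_raid_bdev_process blocks that follow are one-second polls of the rebuild started when spare was attached via bdev_raid_add_base_bdev. An approximate reconstruction of that loop (bdev_raid.sh@657-662 in the trace; the timeout value and the jq filters are copied from it, the loop body is a simplification of the script):

  timeout=629                              # SECONDS-based deadline set at @657
  while (( SECONDS < timeout )); do
      info=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break   # process object gone: rebuild finished (@660 break)
      [[ $(jq -r '.process.target // "none"' <<<"$info") == spare ]]            # data is being reconstructed onto the spare
      sleep 1
  done

Each JSON snapshot in the following iterations shows the rebuild advancing through progress.blocks / progress.percent (24576, 34816, 61440, 90112, 118784 blocks) until the process object disappears and the loop breaks.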
10:49:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.714 10:49:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:44.714 10:49:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:44.714 10:49:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:44.714 10:49:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:44.714 10:49:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.714 10:49:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.714 10:49:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:44.714 "name": "raid_bdev1", 00:23:44.714 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:44.714 "strip_size_kb": 64, 00:23:44.714 "state": "online", 00:23:44.714 "raid_level": "raid5f", 00:23:44.714 "superblock": false, 00:23:44.714 "num_base_bdevs": 3, 00:23:44.714 "num_base_bdevs_discovered": 3, 00:23:44.714 "num_base_bdevs_operational": 3, 00:23:44.714 "process": { 00:23:44.714 "type": "rebuild", 00:23:44.714 "target": "spare", 00:23:44.714 "progress": { 00:23:44.714 "blocks": 61440, 00:23:44.714 "percent": 46 00:23:44.714 } 00:23:44.714 }, 00:23:44.714 "base_bdevs_list": [ 00:23:44.714 { 00:23:44.714 "name": "spare", 00:23:44.714 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:44.714 "is_configured": true, 00:23:44.714 "data_offset": 0, 00:23:44.714 "data_size": 65536 00:23:44.714 }, 00:23:44.714 { 00:23:44.714 "name": "BaseBdev2", 00:23:44.714 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:44.714 "is_configured": true, 00:23:44.714 "data_offset": 0, 00:23:44.714 "data_size": 65536 00:23:44.714 }, 00:23:44.714 { 00:23:44.714 "name": "BaseBdev3", 00:23:44.714 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:44.714 "is_configured": true, 00:23:44.714 "data_offset": 0, 00:23:44.714 "data_size": 65536 00:23:44.714 } 00:23:44.714 ] 00:23:44.714 }' 00:23:44.714 10:49:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.714 10:49:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.714 10:49:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:44.714 10:49:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.714 10:49:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:46.094 "name": "raid_bdev1", 00:23:46.094 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:46.094 "strip_size_kb": 64, 00:23:46.094 "state": "online", 00:23:46.094 "raid_level": "raid5f", 00:23:46.094 "superblock": false, 00:23:46.094 "num_base_bdevs": 3, 00:23:46.094 "num_base_bdevs_discovered": 3, 00:23:46.094 "num_base_bdevs_operational": 3, 
00:23:46.094 "process": { 00:23:46.094 "type": "rebuild", 00:23:46.094 "target": "spare", 00:23:46.094 "progress": { 00:23:46.094 "blocks": 90112, 00:23:46.094 "percent": 68 00:23:46.094 } 00:23:46.094 }, 00:23:46.094 "base_bdevs_list": [ 00:23:46.094 { 00:23:46.094 "name": "spare", 00:23:46.094 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:46.094 "is_configured": true, 00:23:46.094 "data_offset": 0, 00:23:46.094 "data_size": 65536 00:23:46.094 }, 00:23:46.094 { 00:23:46.094 "name": "BaseBdev2", 00:23:46.094 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:46.094 "is_configured": true, 00:23:46.094 "data_offset": 0, 00:23:46.094 "data_size": 65536 00:23:46.094 }, 00:23:46.094 { 00:23:46.094 "name": "BaseBdev3", 00:23:46.094 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:46.094 "is_configured": true, 00:23:46.094 "data_offset": 0, 00:23:46.094 "data_size": 65536 00:23:46.094 } 00:23:46.094 ] 00:23:46.094 }' 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:46.094 10:49:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.466 10:49:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.466 10:49:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:47.466 "name": "raid_bdev1", 00:23:47.466 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:47.466 "strip_size_kb": 64, 00:23:47.466 "state": "online", 00:23:47.466 "raid_level": "raid5f", 00:23:47.466 "superblock": false, 00:23:47.466 "num_base_bdevs": 3, 00:23:47.466 "num_base_bdevs_discovered": 3, 00:23:47.466 "num_base_bdevs_operational": 3, 00:23:47.466 "process": { 00:23:47.466 "type": "rebuild", 00:23:47.466 "target": "spare", 00:23:47.466 "progress": { 00:23:47.466 "blocks": 118784, 00:23:47.466 "percent": 90 00:23:47.466 } 00:23:47.466 }, 00:23:47.466 "base_bdevs_list": [ 00:23:47.466 { 00:23:47.466 "name": "spare", 00:23:47.466 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:47.466 "is_configured": true, 00:23:47.466 "data_offset": 0, 00:23:47.466 "data_size": 65536 00:23:47.466 }, 00:23:47.466 { 00:23:47.466 "name": "BaseBdev2", 00:23:47.466 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:47.466 "is_configured": true, 00:23:47.466 "data_offset": 0, 00:23:47.466 "data_size": 65536 00:23:47.466 }, 00:23:47.466 { 00:23:47.466 "name": "BaseBdev3", 00:23:47.466 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:47.466 "is_configured": true, 00:23:47.466 "data_offset": 0, 00:23:47.467 "data_size": 65536 00:23:47.467 } 00:23:47.467 ] 00:23:47.467 }' 00:23:47.467 10:49:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.467 10:49:14 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.467 10:49:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.725 10:49:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.725 10:49:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:47.982 [2024-07-24 10:49:14.582911] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:47.982 [2024-07-24 10:49:14.583045] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:47.982 [2024-07-24 10:49:14.583169] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.550 10:49:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.809 10:49:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:48.809 "name": "raid_bdev1", 00:23:48.809 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:48.809 "strip_size_kb": 64, 00:23:48.809 "state": "online", 00:23:48.809 "raid_level": "raid5f", 00:23:48.809 "superblock": false, 00:23:48.809 "num_base_bdevs": 3, 00:23:48.809 "num_base_bdevs_discovered": 3, 00:23:48.809 "num_base_bdevs_operational": 3, 00:23:48.809 "base_bdevs_list": [ 00:23:48.809 { 00:23:48.809 "name": "spare", 00:23:48.809 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:48.809 "is_configured": true, 00:23:48.809 "data_offset": 0, 00:23:48.809 "data_size": 65536 00:23:48.809 }, 00:23:48.809 { 00:23:48.809 "name": "BaseBdev2", 00:23:48.809 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:48.809 "is_configured": true, 00:23:48.809 "data_offset": 0, 00:23:48.809 "data_size": 65536 00:23:48.809 }, 00:23:48.809 { 00:23:48.809 "name": "BaseBdev3", 00:23:48.809 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:48.809 "is_configured": true, 00:23:48.809 "data_offset": 0, 00:23:48.809 "data_size": 65536 00:23:48.809 } 00:23:48.809 ] 00:23:48.809 }' 00:23:48.809 10:49:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@660 -- # break 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.100 10:49:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.359 "name": "raid_bdev1", 00:23:49.359 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:49.359 "strip_size_kb": 64, 00:23:49.359 "state": "online", 00:23:49.359 "raid_level": "raid5f", 00:23:49.359 "superblock": false, 00:23:49.359 "num_base_bdevs": 3, 00:23:49.359 "num_base_bdevs_discovered": 3, 00:23:49.359 "num_base_bdevs_operational": 3, 00:23:49.359 "base_bdevs_list": [ 00:23:49.359 { 00:23:49.359 "name": "spare", 00:23:49.359 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:49.359 "is_configured": true, 00:23:49.359 "data_offset": 0, 00:23:49.359 "data_size": 65536 00:23:49.359 }, 00:23:49.359 { 00:23:49.359 "name": "BaseBdev2", 00:23:49.359 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:49.359 "is_configured": true, 00:23:49.359 "data_offset": 0, 00:23:49.359 "data_size": 65536 00:23:49.359 }, 00:23:49.359 { 00:23:49.359 "name": "BaseBdev3", 00:23:49.359 "uuid": "65618521-c45d-46e4-baf9-9b375da875f9", 00:23:49.359 "is_configured": true, 00:23:49.359 "data_offset": 0, 00:23:49.359 "data_size": 65536 00:23:49.359 } 00:23:49.359 ] 00:23:49.359 }' 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.359 10:49:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.617 10:49:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.617 "name": "raid_bdev1", 00:23:49.617 "uuid": "18159312-b540-4219-89cc-be817fd9e8a2", 00:23:49.617 "strip_size_kb": 64, 00:23:49.617 "state": "online", 00:23:49.617 "raid_level": "raid5f", 00:23:49.617 "superblock": false, 00:23:49.617 "num_base_bdevs": 3, 00:23:49.617 "num_base_bdevs_discovered": 3, 00:23:49.617 "num_base_bdevs_operational": 3, 00:23:49.617 "base_bdevs_list": [ 00:23:49.617 { 00:23:49.617 "name": "spare", 00:23:49.617 "uuid": "5adc65b2-da3c-5c7e-89ed-d0a9918882c5", 00:23:49.617 "is_configured": true, 00:23:49.617 "data_offset": 0, 00:23:49.617 "data_size": 65536 00:23:49.617 }, 00:23:49.617 { 00:23:49.617 "name": "BaseBdev2", 00:23:49.617 "uuid": "048a06ed-a7b6-457d-a44a-5042ce882f97", 00:23:49.617 "is_configured": true, 00:23:49.617 "data_offset": 0, 00:23:49.617 "data_size": 65536 00:23:49.617 }, 00:23:49.617 { 00:23:49.617 "name": "BaseBdev3", 00:23:49.617 "uuid": 
"65618521-c45d-46e4-baf9-9b375da875f9", 00:23:49.617 "is_configured": true, 00:23:49.617 "data_offset": 0, 00:23:49.617 "data_size": 65536 00:23:49.617 } 00:23:49.617 ] 00:23:49.617 }' 00:23:49.617 10:49:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.617 10:49:16 -- common/autotest_common.sh@10 -- # set +x 00:23:50.636 10:49:17 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:50.636 [2024-07-24 10:49:17.288026] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:50.636 [2024-07-24 10:49:17.288086] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:50.636 [2024-07-24 10:49:17.288238] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:50.636 [2024-07-24 10:49:17.288368] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:50.636 [2024-07-24 10:49:17.288381] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name raid_bdev1, state offline 00:23:50.636 10:49:17 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.636 10:49:17 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:50.895 10:49:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:50.895 10:49:17 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:50.895 10:49:17 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@12 -- # local i 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:50.895 10:49:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:51.154 /dev/nbd0 00:23:51.154 10:49:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:51.411 10:49:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:51.411 10:49:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:51.411 10:49:17 -- common/autotest_common.sh@857 -- # local i 00:23:51.411 10:49:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:51.411 10:49:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:51.411 10:49:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:51.411 10:49:17 -- common/autotest_common.sh@861 -- # break 00:23:51.411 10:49:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:51.411 10:49:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:51.412 10:49:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.412 1+0 records in 00:23:51.412 1+0 records out 00:23:51.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000635881 s, 6.4 MB/s 00:23:51.412 10:49:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.412 10:49:17 
-- common/autotest_common.sh@874 -- # size=4096 00:23:51.412 10:49:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.412 10:49:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:51.412 10:49:17 -- common/autotest_common.sh@877 -- # return 0 00:23:51.412 10:49:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.412 10:49:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.412 10:49:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:51.670 /dev/nbd1 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:51.670 10:49:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:51.670 10:49:18 -- common/autotest_common.sh@857 -- # local i 00:23:51.670 10:49:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:51.670 10:49:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:51.670 10:49:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:51.670 10:49:18 -- common/autotest_common.sh@861 -- # break 00:23:51.670 10:49:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:51.670 10:49:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:51.670 10:49:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.670 1+0 records in 00:23:51.670 1+0 records out 00:23:51.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848043 s, 4.8 MB/s 00:23:51.670 10:49:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.670 10:49:18 -- common/autotest_common.sh@874 -- # size=4096 00:23:51.670 10:49:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.670 10:49:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:51.670 10:49:18 -- common/autotest_common.sh@877 -- # return 0 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:51.670 10:49:18 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:51.670 10:49:18 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@51 -- # local i 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:51.670 10:49:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@41 -- # break 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@45 -- # return 0 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:23:51.928 10:49:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@41 -- # break 00:23:52.495 10:49:18 -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.495 10:49:18 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:52.495 10:49:18 -- bdev/bdev_raid.sh@709 -- # killprocess 139358 00:23:52.495 10:49:18 -- common/autotest_common.sh@926 -- # '[' -z 139358 ']' 00:23:52.495 10:49:18 -- common/autotest_common.sh@930 -- # kill -0 139358 00:23:52.495 10:49:18 -- common/autotest_common.sh@931 -- # uname 00:23:52.495 10:49:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:52.495 10:49:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139358 00:23:52.495 10:49:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:52.495 10:49:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:52.495 10:49:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139358' 00:23:52.495 killing process with pid 139358 00:23:52.495 10:49:18 -- common/autotest_common.sh@945 -- # kill 139358 00:23:52.495 Received shutdown signal, test time was about 60.000000 seconds 00:23:52.495 00:23:52.495 Latency(us) 00:23:52.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.495 =================================================================================================================== 00:23:52.495 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:52.495 10:49:18 -- common/autotest_common.sh@950 -- # wait 139358 00:23:52.495 [2024-07-24 10:49:18.931084] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:52.495 [2024-07-24 10:49:19.006860] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:52.754 10:49:19 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:52.754 00:23:52.754 real 0m21.759s 00:23:52.754 user 0m33.839s 00:23:52.754 sys 0m2.837s 00:23:52.754 10:49:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.754 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:23:52.754 ************************************ 00:23:52.754 END TEST raid5f_rebuild_test 00:23:52.754 ************************************ 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:23:53.013 10:49:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:53.013 10:49:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:53.013 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:23:53.013 ************************************ 00:23:53.013 START TEST raid5f_rebuild_test_sb 00:23:53.013 ************************************ 00:23:53.013 10:49:19 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@519 -- # 
local superblock=true 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@544 -- # raid_pid=139912 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@545 -- # waitforlisten 139912 /var/tmp/spdk-raid.sock 00:23:53.013 10:49:19 -- common/autotest_common.sh@819 -- # '[' -z 139912 ']' 00:23:53.013 10:49:19 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:53.013 10:49:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:53.013 10:49:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:53.013 10:49:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:53.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:53.013 10:49:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:53.013 10:49:19 -- common/autotest_common.sh@10 -- # set +x 00:23:53.013 [2024-07-24 10:49:19.553413] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:23:53.013 [2024-07-24 10:49:19.554091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139912 ] 00:23:53.013 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:53.013 Zero copy mechanism will not be used. 
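Illustrative sketch, not from the captured run: what the bdevperf launch recorded above amounts to — start the example app as a standalone RPC target with the flags shown in the trace, then wait for its UNIX-domain socket before issuing any rpc.py calls. The polling loop stands in for the harness's waitforlisten helper and the variable names are assumptions; the binary path, socket path and flags are copied from the log.

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk-raid.sock

  # -z: create no bdevs yet, wait for RPCs; -t 60 -w randrw -M 50 -o 3M -q 2:
  # 60 s of 50/50 random read/write at 3 MiB I/O size, queue depth 2; -L bdev_raid: raid debug logs.
  "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 \
          -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!

  # Stand-in for waitforlisten: poll until the RPC socket answers.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1
  done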
00:23:53.271 [2024-07-24 10:49:19.711963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.271 [2024-07-24 10:49:19.851428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.271 [2024-07-24 10:49:19.949024] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:54.204 10:49:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:54.204 10:49:20 -- common/autotest_common.sh@852 -- # return 0 00:23:54.204 10:49:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:54.204 10:49:20 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:54.204 10:49:20 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:54.462 BaseBdev1_malloc 00:23:54.462 10:49:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:54.720 [2024-07-24 10:49:21.184287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:54.720 [2024-07-24 10:49:21.184712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.720 [2024-07-24 10:49:21.184893] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:23:54.720 [2024-07-24 10:49:21.185093] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.720 [2024-07-24 10:49:21.188321] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.720 [2024-07-24 10:49:21.188520] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:54.720 BaseBdev1 00:23:54.720 10:49:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:54.720 10:49:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:54.720 10:49:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:54.978 BaseBdev2_malloc 00:23:54.978 10:49:21 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:55.237 [2024-07-24 10:49:21.721435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:55.237 [2024-07-24 10:49:21.721857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.237 [2024-07-24 10:49:21.721957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:23:55.237 [2024-07-24 10:49:21.722332] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.237 [2024-07-24 10:49:21.725380] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.237 [2024-07-24 10:49:21.725567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:55.237 BaseBdev2 00:23:55.237 10:49:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:55.237 10:49:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:55.237 10:49:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:55.495 BaseBdev3_malloc 00:23:55.495 10:49:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:55.753 [2024-07-24 10:49:22.301514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:55.753 [2024-07-24 10:49:22.301971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.753 [2024-07-24 10:49:22.302071] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:55.753 [2024-07-24 10:49:22.302425] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.753 [2024-07-24 10:49:22.305494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.753 [2024-07-24 10:49:22.305686] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:55.753 BaseBdev3 00:23:55.753 10:49:22 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:56.021 spare_malloc 00:23:56.021 10:49:22 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:56.292 spare_delay 00:23:56.292 10:49:22 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:56.550 [2024-07-24 10:49:23.106610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:56.550 [2024-07-24 10:49:23.107173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.550 [2024-07-24 10:49:23.107370] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:56.550 [2024-07-24 10:49:23.107622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.550 [2024-07-24 10:49:23.111148] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.550 [2024-07-24 10:49:23.111342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:56.550 spare 00:23:56.550 10:49:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:56.808 [2024-07-24 10:49:23.428020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.808 [2024-07-24 10:49:23.430970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:56.808 [2024-07-24 10:49:23.431188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:56.808 [2024-07-24 10:49:23.431630] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:56.808 [2024-07-24 10:49:23.431765] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:56.808 [2024-07-24 10:49:23.432022] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:23:56.808 [2024-07-24 10:49:23.433007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:56.808 [2024-07-24 10:49:23.433160] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:56.808 [2024-07-24 10:49:23.433540] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.808 10:49:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.375 10:49:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.375 "name": "raid_bdev1", 00:23:57.375 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:23:57.375 "strip_size_kb": 64, 00:23:57.375 "state": "online", 00:23:57.375 "raid_level": "raid5f", 00:23:57.375 "superblock": true, 00:23:57.375 "num_base_bdevs": 3, 00:23:57.375 "num_base_bdevs_discovered": 3, 00:23:57.375 "num_base_bdevs_operational": 3, 00:23:57.375 "base_bdevs_list": [ 00:23:57.375 { 00:23:57.375 "name": "BaseBdev1", 00:23:57.375 "uuid": "ebafe730-bf10-5761-a46b-ec7d843bac41", 00:23:57.375 "is_configured": true, 00:23:57.375 "data_offset": 2048, 00:23:57.375 "data_size": 63488 00:23:57.375 }, 00:23:57.375 { 00:23:57.375 "name": "BaseBdev2", 00:23:57.375 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:23:57.375 "is_configured": true, 00:23:57.375 "data_offset": 2048, 00:23:57.375 "data_size": 63488 00:23:57.375 }, 00:23:57.375 { 00:23:57.375 "name": "BaseBdev3", 00:23:57.375 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:23:57.375 "is_configured": true, 00:23:57.375 "data_offset": 2048, 00:23:57.375 "data_size": 63488 00:23:57.375 } 00:23:57.375 ] 00:23:57.375 }' 00:23:57.375 10:49:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.375 10:49:23 -- common/autotest_common.sh@10 -- # set +x 00:23:57.941 10:49:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:57.941 10:49:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:58.198 [2024-07-24 10:49:24.700824] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.198 10:49:24 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:23:58.198 10:49:24 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.198 10:49:24 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:58.462 10:49:24 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:58.462 10:49:24 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:58.462 10:49:24 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:58.462 10:49:24 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:58.462 10:49:24 -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@12 -- # local i 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.462 10:49:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:58.744 [2024-07-24 10:49:25.272830] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:23:58.744 /dev/nbd0 00:23:58.744 10:49:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:58.744 10:49:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:58.744 10:49:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:58.744 10:49:25 -- common/autotest_common.sh@857 -- # local i 00:23:58.744 10:49:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:58.744 10:49:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:58.744 10:49:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:58.744 10:49:25 -- common/autotest_common.sh@861 -- # break 00:23:58.744 10:49:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:58.744 10:49:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:58.744 10:49:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:58.744 1+0 records in 00:23:58.744 1+0 records out 00:23:58.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608184 s, 6.7 MB/s 00:23:58.744 10:49:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:58.744 10:49:25 -- common/autotest_common.sh@874 -- # size=4096 00:23:58.744 10:49:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:58.744 10:49:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:58.744 10:49:25 -- common/autotest_common.sh@877 -- # return 0 00:23:58.744 10:49:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:58.744 10:49:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:58.744 10:49:25 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:58.744 10:49:25 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:58.744 10:49:25 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:58.744 10:49:25 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:59.317 496+0 records in 00:23:59.317 496+0 records out 00:23:59.317 65011712 bytes (65 MB, 62 MiB) copied, 0.376093 s, 173 MB/s 00:23:59.317 10:49:25 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:59.317 10:49:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:59.317 10:49:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:59.317 10:49:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:59.317 10:49:25 -- bdev/nbd_common.sh@51 -- # local i 00:23:59.317 10:49:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:59.317 10:49:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:59.575 [2024-07-24 10:49:26.026769] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.575 10:49:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:59.575 10:49:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
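Illustrative arithmetic, not from the captured run: where the dd parameters above come from. For raid5f with a 64 KiB strip over three base bdevs, one full stripe carries two data strips, and the test fills the whole array in full-stripe units. Only the shell variable names below are invented; every number appears in the trace.

  strip_size_kb=64                  # -z 64 at raid creation
  blocklen=512                      # from 'blockcnt 126976, blocklen 512'
  num_base_bdevs=3                  # BaseBdev1..3, one strip per stripe holds parity

  strip_blocks=$(( strip_size_kb * 1024 / blocklen ))           # 128 blocks
  write_unit_blocks=$(( strip_blocks * (num_base_bdevs - 1) ))  # 256 blocks = one full stripe
  write_unit_bytes=$(( write_unit_blocks * blocklen ))          # 131072 -> dd bs=131072

  raid_bdev_size=126976             # blocks, reported by bdev_get_bdevs
  count=$(( raid_bdev_size / write_unit_blocks ))               # 496 -> dd count=496
  echo $(( count * write_unit_bytes ))                          # 65011712 bytes, as logged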
00:23:59.575 10:49:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:59.575 10:49:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:59.575 10:49:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:59.575 10:49:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:59.575 10:49:26 -- bdev/nbd_common.sh@41 -- # break 00:23:59.575 10:49:26 -- bdev/nbd_common.sh@45 -- # return 0 00:23:59.575 10:49:26 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:59.575 [2024-07-24 10:49:26.254379] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.834 10:49:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.092 10:49:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:00.092 "name": "raid_bdev1", 00:24:00.092 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:00.092 "strip_size_kb": 64, 00:24:00.092 "state": "online", 00:24:00.092 "raid_level": "raid5f", 00:24:00.092 "superblock": true, 00:24:00.092 "num_base_bdevs": 3, 00:24:00.092 "num_base_bdevs_discovered": 2, 00:24:00.092 "num_base_bdevs_operational": 2, 00:24:00.092 "base_bdevs_list": [ 00:24:00.092 { 00:24:00.092 "name": null, 00:24:00.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.092 "is_configured": false, 00:24:00.092 "data_offset": 2048, 00:24:00.092 "data_size": 63488 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "name": "BaseBdev2", 00:24:00.092 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:00.092 "is_configured": true, 00:24:00.092 "data_offset": 2048, 00:24:00.092 "data_size": 63488 00:24:00.092 }, 00:24:00.092 { 00:24:00.092 "name": "BaseBdev3", 00:24:00.092 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:00.092 "is_configured": true, 00:24:00.092 "data_offset": 2048, 00:24:00.092 "data_size": 63488 00:24:00.092 } 00:24:00.092 ] 00:24:00.092 }' 00:24:00.092 10:49:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:00.092 10:49:26 -- common/autotest_common.sh@10 -- # set +x 00:24:00.658 10:49:27 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:00.917 [2024-07-24 10:49:27.578783] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:00.917 [2024-07-24 10:49:27.579242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:00.917 [2024-07-24 10:49:27.586949] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000025500 
00:24:00.917 [2024-07-24 10:49:27.590399] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:01.175 10:49:27 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:02.109 10:49:28 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.109 10:49:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:02.109 10:49:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:02.109 10:49:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:02.109 10:49:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.109 10:49:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.109 10:49:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.408 10:49:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:02.408 "name": "raid_bdev1", 00:24:02.408 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:02.408 "strip_size_kb": 64, 00:24:02.408 "state": "online", 00:24:02.408 "raid_level": "raid5f", 00:24:02.408 "superblock": true, 00:24:02.408 "num_base_bdevs": 3, 00:24:02.408 "num_base_bdevs_discovered": 3, 00:24:02.408 "num_base_bdevs_operational": 3, 00:24:02.408 "process": { 00:24:02.408 "type": "rebuild", 00:24:02.408 "target": "spare", 00:24:02.408 "progress": { 00:24:02.408 "blocks": 24576, 00:24:02.408 "percent": 19 00:24:02.408 } 00:24:02.408 }, 00:24:02.408 "base_bdevs_list": [ 00:24:02.408 { 00:24:02.408 "name": "spare", 00:24:02.408 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:02.408 "is_configured": true, 00:24:02.408 "data_offset": 2048, 00:24:02.408 "data_size": 63488 00:24:02.408 }, 00:24:02.408 { 00:24:02.408 "name": "BaseBdev2", 00:24:02.408 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:02.408 "is_configured": true, 00:24:02.408 "data_offset": 2048, 00:24:02.408 "data_size": 63488 00:24:02.408 }, 00:24:02.408 { 00:24:02.408 "name": "BaseBdev3", 00:24:02.408 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:02.408 "is_configured": true, 00:24:02.408 "data_offset": 2048, 00:24:02.408 "data_size": 63488 00:24:02.408 } 00:24:02.408 ] 00:24:02.408 }' 00:24:02.408 10:49:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.408 10:49:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.408 10:49:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.408 10:49:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.408 10:49:29 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:02.666 [2024-07-24 10:49:29.281050] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:02.666 [2024-07-24 10:49:29.313437] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:02.666 [2024-07-24 10:49:29.313907] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=2 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.666 10:49:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.233 10:49:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.233 "name": "raid_bdev1", 00:24:03.233 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:03.233 "strip_size_kb": 64, 00:24:03.233 "state": "online", 00:24:03.233 "raid_level": "raid5f", 00:24:03.233 "superblock": true, 00:24:03.233 "num_base_bdevs": 3, 00:24:03.233 "num_base_bdevs_discovered": 2, 00:24:03.233 "num_base_bdevs_operational": 2, 00:24:03.233 "base_bdevs_list": [ 00:24:03.233 { 00:24:03.233 "name": null, 00:24:03.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.233 "is_configured": false, 00:24:03.233 "data_offset": 2048, 00:24:03.233 "data_size": 63488 00:24:03.233 }, 00:24:03.233 { 00:24:03.233 "name": "BaseBdev2", 00:24:03.233 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:03.233 "is_configured": true, 00:24:03.233 "data_offset": 2048, 00:24:03.233 "data_size": 63488 00:24:03.233 }, 00:24:03.233 { 00:24:03.233 "name": "BaseBdev3", 00:24:03.233 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:03.233 "is_configured": true, 00:24:03.233 "data_offset": 2048, 00:24:03.233 "data_size": 63488 00:24:03.233 } 00:24:03.233 ] 00:24:03.233 }' 00:24:03.233 10:49:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.233 10:49:29 -- common/autotest_common.sh@10 -- # set +x 00:24:03.799 10:49:30 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.799 10:49:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.799 10:49:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:03.799 10:49:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:03.799 10:49:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.799 10:49:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.799 10:49:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.057 10:49:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.057 "name": "raid_bdev1", 00:24:04.057 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:04.057 "strip_size_kb": 64, 00:24:04.057 "state": "online", 00:24:04.057 "raid_level": "raid5f", 00:24:04.057 "superblock": true, 00:24:04.057 "num_base_bdevs": 3, 00:24:04.057 "num_base_bdevs_discovered": 2, 00:24:04.057 "num_base_bdevs_operational": 2, 00:24:04.057 "base_bdevs_list": [ 00:24:04.057 { 00:24:04.057 "name": null, 00:24:04.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.057 "is_configured": false, 00:24:04.057 "data_offset": 2048, 00:24:04.057 "data_size": 63488 00:24:04.057 }, 00:24:04.057 { 00:24:04.057 "name": "BaseBdev2", 00:24:04.057 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:04.057 "is_configured": true, 00:24:04.057 "data_offset": 2048, 00:24:04.057 "data_size": 63488 00:24:04.057 }, 00:24:04.057 { 00:24:04.057 "name": "BaseBdev3", 00:24:04.057 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:04.057 
"is_configured": true, 00:24:04.057 "data_offset": 2048, 00:24:04.057 "data_size": 63488 00:24:04.057 } 00:24:04.057 ] 00:24:04.057 }' 00:24:04.057 10:49:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.057 10:49:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:04.057 10:49:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.057 10:49:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:04.057 10:49:30 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.315 [2024-07-24 10:49:30.939220] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:04.315 [2024-07-24 10:49:30.939601] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.316 [2024-07-24 10:49:30.944652] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:24:04.316 [2024-07-24 10:49:30.947627] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:04.316 10:49:30 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:05.689 10:49:31 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.689 10:49:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.689 10:49:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.689 10:49:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.689 10:49:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.689 10:49:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.689 10:49:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.689 "name": "raid_bdev1", 00:24:05.689 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:05.689 "strip_size_kb": 64, 00:24:05.689 "state": "online", 00:24:05.689 "raid_level": "raid5f", 00:24:05.689 "superblock": true, 00:24:05.689 "num_base_bdevs": 3, 00:24:05.689 "num_base_bdevs_discovered": 3, 00:24:05.689 "num_base_bdevs_operational": 3, 00:24:05.689 "process": { 00:24:05.689 "type": "rebuild", 00:24:05.689 "target": "spare", 00:24:05.689 "progress": { 00:24:05.689 "blocks": 24576, 00:24:05.689 "percent": 19 00:24:05.689 } 00:24:05.689 }, 00:24:05.689 "base_bdevs_list": [ 00:24:05.689 { 00:24:05.689 "name": "spare", 00:24:05.689 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:05.689 "is_configured": true, 00:24:05.689 "data_offset": 2048, 00:24:05.689 "data_size": 63488 00:24:05.689 }, 00:24:05.689 { 00:24:05.689 "name": "BaseBdev2", 00:24:05.689 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:05.689 "is_configured": true, 00:24:05.689 "data_offset": 2048, 00:24:05.689 "data_size": 63488 00:24:05.689 }, 00:24:05.689 { 00:24:05.689 "name": "BaseBdev3", 00:24:05.689 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:05.689 "is_configured": true, 00:24:05.689 "data_offset": 2048, 00:24:05.689 "data_size": 63488 00:24:05.689 } 00:24:05.689 ] 00:24:05.689 }' 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 
00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:05.689 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@657 -- # local timeout=652 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.689 10:49:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.955 10:49:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.955 "name": "raid_bdev1", 00:24:05.955 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:05.955 "strip_size_kb": 64, 00:24:05.955 "state": "online", 00:24:05.955 "raid_level": "raid5f", 00:24:05.955 "superblock": true, 00:24:05.955 "num_base_bdevs": 3, 00:24:05.955 "num_base_bdevs_discovered": 3, 00:24:05.955 "num_base_bdevs_operational": 3, 00:24:05.955 "process": { 00:24:05.955 "type": "rebuild", 00:24:05.955 "target": "spare", 00:24:05.955 "progress": { 00:24:05.955 "blocks": 32768, 00:24:05.955 "percent": 25 00:24:05.955 } 00:24:05.955 }, 00:24:05.955 "base_bdevs_list": [ 00:24:05.955 { 00:24:05.955 "name": "spare", 00:24:05.955 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:05.955 "is_configured": true, 00:24:05.955 "data_offset": 2048, 00:24:05.955 "data_size": 63488 00:24:05.955 }, 00:24:05.955 { 00:24:05.955 "name": "BaseBdev2", 00:24:05.955 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:05.955 "is_configured": true, 00:24:05.955 "data_offset": 2048, 00:24:05.955 "data_size": 63488 00:24:05.955 }, 00:24:05.955 { 00:24:05.955 "name": "BaseBdev3", 00:24:05.955 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:05.955 "is_configured": true, 00:24:05.955 "data_offset": 2048, 00:24:05.955 "data_size": 63488 00:24:05.955 } 00:24:05.955 ] 00:24:05.955 }' 00:24:05.955 10:49:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:06.213 10:49:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.213 10:49:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.213 10:49:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.213 10:49:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:07.148 10:49:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:07.148 10:49:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.149 10:49:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:07.149 10:49:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:07.149 10:49:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:07.149 10:49:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:07.149 10:49:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.149 10:49:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.408 10:49:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.408 "name": "raid_bdev1", 00:24:07.408 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:07.408 "strip_size_kb": 64, 00:24:07.408 "state": "online", 00:24:07.408 "raid_level": "raid5f", 00:24:07.408 "superblock": true, 00:24:07.408 "num_base_bdevs": 3, 00:24:07.408 "num_base_bdevs_discovered": 3, 00:24:07.408 "num_base_bdevs_operational": 3, 00:24:07.408 "process": { 00:24:07.408 "type": "rebuild", 00:24:07.408 "target": "spare", 00:24:07.408 "progress": { 00:24:07.408 "blocks": 61440, 00:24:07.408 "percent": 48 00:24:07.408 } 00:24:07.408 }, 00:24:07.408 "base_bdevs_list": [ 00:24:07.408 { 00:24:07.408 "name": "spare", 00:24:07.408 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:07.408 "is_configured": true, 00:24:07.408 "data_offset": 2048, 00:24:07.408 "data_size": 63488 00:24:07.408 }, 00:24:07.408 { 00:24:07.408 "name": "BaseBdev2", 00:24:07.408 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:07.408 "is_configured": true, 00:24:07.408 "data_offset": 2048, 00:24:07.408 "data_size": 63488 00:24:07.408 }, 00:24:07.408 { 00:24:07.408 "name": "BaseBdev3", 00:24:07.408 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:07.408 "is_configured": true, 00:24:07.408 "data_offset": 2048, 00:24:07.408 "data_size": 63488 00:24:07.408 } 00:24:07.408 ] 00:24:07.408 }' 00:24:07.408 10:49:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.408 10:49:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.408 10:49:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.666 10:49:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.666 10:49:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.599 10:49:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.858 10:49:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.858 "name": "raid_bdev1", 00:24:08.858 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:08.858 "strip_size_kb": 64, 00:24:08.858 "state": "online", 00:24:08.858 "raid_level": "raid5f", 00:24:08.858 "superblock": true, 00:24:08.858 "num_base_bdevs": 3, 00:24:08.858 "num_base_bdevs_discovered": 3, 00:24:08.858 "num_base_bdevs_operational": 3, 00:24:08.858 "process": { 00:24:08.858 "type": "rebuild", 00:24:08.858 "target": "spare", 00:24:08.858 "progress": { 00:24:08.858 "blocks": 90112, 00:24:08.858 "percent": 70 00:24:08.858 } 00:24:08.858 }, 00:24:08.858 "base_bdevs_list": [ 00:24:08.858 { 00:24:08.858 "name": "spare", 00:24:08.858 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:08.858 "is_configured": true, 00:24:08.858 "data_offset": 2048, 00:24:08.858 "data_size": 63488 00:24:08.858 }, 00:24:08.858 { 
00:24:08.858 "name": "BaseBdev2", 00:24:08.858 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:08.858 "is_configured": true, 00:24:08.858 "data_offset": 2048, 00:24:08.858 "data_size": 63488 00:24:08.858 }, 00:24:08.858 { 00:24:08.858 "name": "BaseBdev3", 00:24:08.858 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:08.858 "is_configured": true, 00:24:08.858 "data_offset": 2048, 00:24:08.858 "data_size": 63488 00:24:08.858 } 00:24:08.858 ] 00:24:08.858 }' 00:24:08.858 10:49:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.858 10:49:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.858 10:49:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:09.116 10:49:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.116 10:49:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:10.084 10:49:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:10.084 10:49:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.084 10:49:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:10.084 10:49:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:10.084 10:49:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:10.085 10:49:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:10.085 10:49:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.085 10:49:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.342 10:49:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.342 "name": "raid_bdev1", 00:24:10.342 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:10.342 "strip_size_kb": 64, 00:24:10.342 "state": "online", 00:24:10.342 "raid_level": "raid5f", 00:24:10.342 "superblock": true, 00:24:10.342 "num_base_bdevs": 3, 00:24:10.342 "num_base_bdevs_discovered": 3, 00:24:10.342 "num_base_bdevs_operational": 3, 00:24:10.342 "process": { 00:24:10.342 "type": "rebuild", 00:24:10.342 "target": "spare", 00:24:10.342 "progress": { 00:24:10.342 "blocks": 116736, 00:24:10.342 "percent": 91 00:24:10.342 } 00:24:10.342 }, 00:24:10.342 "base_bdevs_list": [ 00:24:10.342 { 00:24:10.342 "name": "spare", 00:24:10.342 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:10.342 "is_configured": true, 00:24:10.342 "data_offset": 2048, 00:24:10.342 "data_size": 63488 00:24:10.342 }, 00:24:10.342 { 00:24:10.342 "name": "BaseBdev2", 00:24:10.342 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:10.342 "is_configured": true, 00:24:10.342 "data_offset": 2048, 00:24:10.342 "data_size": 63488 00:24:10.342 }, 00:24:10.342 { 00:24:10.342 "name": "BaseBdev3", 00:24:10.342 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:10.342 "is_configured": true, 00:24:10.342 "data_offset": 2048, 00:24:10.342 "data_size": 63488 00:24:10.342 } 00:24:10.342 ] 00:24:10.342 }' 00:24:10.342 10:49:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.342 10:49:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.343 10:49:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:10.343 10:49:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.343 10:49:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:10.600 [2024-07-24 10:49:37.225341] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:10.600 [2024-07-24 10:49:37.225893] 
bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:10.600 [2024-07-24 10:49:37.226307] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.534 10:49:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.792 10:49:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.792 "name": "raid_bdev1", 00:24:11.792 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:11.792 "strip_size_kb": 64, 00:24:11.792 "state": "online", 00:24:11.792 "raid_level": "raid5f", 00:24:11.792 "superblock": true, 00:24:11.792 "num_base_bdevs": 3, 00:24:11.792 "num_base_bdevs_discovered": 3, 00:24:11.792 "num_base_bdevs_operational": 3, 00:24:11.792 "base_bdevs_list": [ 00:24:11.792 { 00:24:11.792 "name": "spare", 00:24:11.792 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:11.792 "is_configured": true, 00:24:11.792 "data_offset": 2048, 00:24:11.792 "data_size": 63488 00:24:11.792 }, 00:24:11.792 { 00:24:11.793 "name": "BaseBdev2", 00:24:11.793 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:11.793 "is_configured": true, 00:24:11.793 "data_offset": 2048, 00:24:11.793 "data_size": 63488 00:24:11.793 }, 00:24:11.793 { 00:24:11.793 "name": "BaseBdev3", 00:24:11.793 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:11.793 "is_configured": true, 00:24:11.793 "data_offset": 2048, 00:24:11.793 "data_size": 63488 00:24:11.793 } 00:24:11.793 ] 00:24:11.793 }' 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@660 -- # break 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.793 10:49:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.051 10:49:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:12.051 "name": "raid_bdev1", 00:24:12.051 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:12.051 "strip_size_kb": 64, 00:24:12.051 "state": "online", 00:24:12.051 "raid_level": "raid5f", 00:24:12.051 "superblock": true, 00:24:12.051 "num_base_bdevs": 3, 00:24:12.051 "num_base_bdevs_discovered": 3, 00:24:12.051 
"num_base_bdevs_operational": 3, 00:24:12.051 "base_bdevs_list": [ 00:24:12.051 { 00:24:12.051 "name": "spare", 00:24:12.051 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:12.051 "is_configured": true, 00:24:12.051 "data_offset": 2048, 00:24:12.051 "data_size": 63488 00:24:12.051 }, 00:24:12.051 { 00:24:12.051 "name": "BaseBdev2", 00:24:12.051 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:12.051 "is_configured": true, 00:24:12.051 "data_offset": 2048, 00:24:12.051 "data_size": 63488 00:24:12.051 }, 00:24:12.051 { 00:24:12.051 "name": "BaseBdev3", 00:24:12.051 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:12.051 "is_configured": true, 00:24:12.051 "data_offset": 2048, 00:24:12.051 "data_size": 63488 00:24:12.051 } 00:24:12.051 ] 00:24:12.051 }' 00:24:12.051 10:49:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:12.051 10:49:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:12.051 10:49:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.310 10:49:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.568 10:49:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.568 "name": "raid_bdev1", 00:24:12.568 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:12.568 "strip_size_kb": 64, 00:24:12.568 "state": "online", 00:24:12.568 "raid_level": "raid5f", 00:24:12.568 "superblock": true, 00:24:12.568 "num_base_bdevs": 3, 00:24:12.568 "num_base_bdevs_discovered": 3, 00:24:12.568 "num_base_bdevs_operational": 3, 00:24:12.568 "base_bdevs_list": [ 00:24:12.568 { 00:24:12.568 "name": "spare", 00:24:12.568 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:12.568 "is_configured": true, 00:24:12.568 "data_offset": 2048, 00:24:12.568 "data_size": 63488 00:24:12.568 }, 00:24:12.568 { 00:24:12.568 "name": "BaseBdev2", 00:24:12.568 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:12.568 "is_configured": true, 00:24:12.568 "data_offset": 2048, 00:24:12.568 "data_size": 63488 00:24:12.568 }, 00:24:12.568 { 00:24:12.568 "name": "BaseBdev3", 00:24:12.568 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:12.568 "is_configured": true, 00:24:12.568 "data_offset": 2048, 00:24:12.568 "data_size": 63488 00:24:12.568 } 00:24:12.568 ] 00:24:12.568 }' 00:24:12.568 10:49:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.568 10:49:39 -- common/autotest_common.sh@10 -- # set +x 00:24:13.152 10:49:39 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:13.410 [2024-07-24 10:49:40.005481] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:13.410 [2024-07-24 10:49:40.005833] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:13.410 [2024-07-24 10:49:40.006093] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:13.410 [2024-07-24 10:49:40.006328] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:13.410 [2024-07-24 10:49:40.006454] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:24:13.410 10:49:40 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:13.410 10:49:40 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.668 10:49:40 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:13.668 10:49:40 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:13.668 10:49:40 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@12 -- # local i 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.668 10:49:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:13.926 /dev/nbd0 00:24:13.926 10:49:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:13.926 10:49:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:13.926 10:49:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:13.926 10:49:40 -- common/autotest_common.sh@857 -- # local i 00:24:13.926 10:49:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:13.926 10:49:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:13.926 10:49:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:13.926 10:49:40 -- common/autotest_common.sh@861 -- # break 00:24:13.926 10:49:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:13.926 10:49:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:13.926 10:49:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:14.184 1+0 records in 00:24:14.184 1+0 records out 00:24:14.184 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577387 s, 7.1 MB/s 00:24:14.184 10:49:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:14.184 10:49:40 -- common/autotest_common.sh@874 -- # size=4096 00:24:14.184 10:49:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:14.184 10:49:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:14.184 10:49:40 -- common/autotest_common.sh@877 -- # return 0 00:24:14.184 10:49:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:14.184 10:49:40 -- bdev/nbd_common.sh@14 
-- # (( i < 2 )) 00:24:14.184 10:49:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:14.184 /dev/nbd1 00:24:14.441 10:49:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:14.441 10:49:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:14.441 10:49:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:14.441 10:49:40 -- common/autotest_common.sh@857 -- # local i 00:24:14.441 10:49:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:14.441 10:49:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:14.441 10:49:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:14.441 10:49:40 -- common/autotest_common.sh@861 -- # break 00:24:14.441 10:49:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:14.441 10:49:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:14.441 10:49:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:14.441 1+0 records in 00:24:14.441 1+0 records out 00:24:14.441 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565949 s, 7.2 MB/s 00:24:14.441 10:49:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:14.441 10:49:40 -- common/autotest_common.sh@874 -- # size=4096 00:24:14.441 10:49:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:14.441 10:49:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:14.441 10:49:40 -- common/autotest_common.sh@877 -- # return 0 00:24:14.441 10:49:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:14.441 10:49:40 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:14.441 10:49:40 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:14.441 10:49:41 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:14.442 10:49:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:14.442 10:49:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:14.442 10:49:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:14.442 10:49:41 -- bdev/nbd_common.sh@51 -- # local i 00:24:14.442 10:49:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:14.442 10:49:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@41 -- # break 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@45 -- # return 0 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:14.700 10:49:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:14.958 10:49:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:14.958 10:49:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:14.958 10:49:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:14.958 10:49:41 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:14.958 10:49:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.958 10:49:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:14.958 10:49:41 -- bdev/nbd_common.sh@41 -- # break 00:24:14.958 10:49:41 -- bdev/nbd_common.sh@45 -- # return 0 00:24:14.958 10:49:41 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:14.958 10:49:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:14.958 10:49:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:14.958 10:49:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:15.524 10:49:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:15.782 [2024-07-24 10:49:42.213693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:15.782 [2024-07-24 10:49:42.215666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:15.782 [2024-07-24 10:49:42.216006] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:15.782 [2024-07-24 10:49:42.216211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:15.782 [2024-07-24 10:49:42.219818] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:15.782 [2024-07-24 10:49:42.220190] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:15.782 [2024-07-24 10:49:42.220537] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:15.782 [2024-07-24 10:49:42.220810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.782 BaseBdev1 00:24:15.782 10:49:42 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:15.782 10:49:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:15.782 10:49:42 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:16.039 10:49:42 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:16.298 [2024-07-24 10:49:42.876793] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:16.298 [2024-07-24 10:49:42.877187] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:16.298 [2024-07-24 10:49:42.877405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:16.298 [2024-07-24 10:49:42.877576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:16.298 [2024-07-24 10:49:42.878254] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:16.298 [2024-07-24 10:49:42.878448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:16.298 [2024-07-24 10:49:42.878730] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:16.298 [2024-07-24 10:49:42.878874] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:16.298 [2024-07-24 10:49:42.878984] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:24:16.298 [2024-07-24 10:49:42.879148] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:24:16.298 [2024-07-24 10:49:42.879317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:16.298 BaseBdev2 00:24:16.298 10:49:42 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:16.298 10:49:42 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:16.298 10:49:42 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:16.556 10:49:43 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:16.814 [2024-07-24 10:49:43.444977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:16.814 [2024-07-24 10:49:43.445388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:16.814 [2024-07-24 10:49:43.445488] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:16.814 [2024-07-24 10:49:43.445648] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:16.814 [2024-07-24 10:49:43.446334] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:16.814 [2024-07-24 10:49:43.446540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:16.814 [2024-07-24 10:49:43.446788] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:16.814 [2024-07-24 10:49:43.446951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:16.814 BaseBdev3 00:24:16.814 10:49:43 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:17.380 10:49:43 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:17.380 [2024-07-24 10:49:44.013070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:17.380 [2024-07-24 10:49:44.013433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.380 [2024-07-24 10:49:44.013613] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:17.380 [2024-07-24 10:49:44.013792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.380 [2024-07-24 10:49:44.014457] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.380 [2024-07-24 10:49:44.014666] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:17.380 [2024-07-24 10:49:44.014970] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:17.380 [2024-07-24 10:49:44.015153] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.380 spare 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.380 10:49:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.639 [2024-07-24 10:49:44.115418] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:24:17.639 [2024-07-24 10:49:44.115752] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:17.639 [2024-07-24 10:49:44.116036] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000044230 00:24:17.639 [2024-07-24 10:49:44.117010] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:24:17.639 [2024-07-24 10:49:44.117151] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:24:17.639 [2024-07-24 10:49:44.117494] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.639 10:49:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:17.639 "name": "raid_bdev1", 00:24:17.639 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:17.639 "strip_size_kb": 64, 00:24:17.639 "state": "online", 00:24:17.639 "raid_level": "raid5f", 00:24:17.639 "superblock": true, 00:24:17.639 "num_base_bdevs": 3, 00:24:17.639 "num_base_bdevs_discovered": 3, 00:24:17.639 "num_base_bdevs_operational": 3, 00:24:17.639 "base_bdevs_list": [ 00:24:17.639 { 00:24:17.639 "name": "spare", 00:24:17.639 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:17.639 "is_configured": true, 00:24:17.639 "data_offset": 2048, 00:24:17.639 "data_size": 63488 00:24:17.639 }, 00:24:17.639 { 00:24:17.639 "name": "BaseBdev2", 00:24:17.639 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:17.639 "is_configured": true, 00:24:17.639 "data_offset": 2048, 00:24:17.639 "data_size": 63488 00:24:17.639 }, 00:24:17.639 { 00:24:17.639 "name": "BaseBdev3", 00:24:17.639 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:17.639 "is_configured": true, 00:24:17.639 "data_offset": 2048, 00:24:17.639 "data_size": 63488 00:24:17.639 } 00:24:17.639 ] 00:24:17.639 }' 00:24:17.639 10:49:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:17.639 10:49:44 -- common/autotest_common.sh@10 -- # set +x 00:24:18.573 10:49:45 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:18.573 10:49:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.573 10:49:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:18.573 10:49:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:18.573 10:49:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.573 10:49:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.573 10:49:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.831 10:49:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:18.831 "name": "raid_bdev1", 00:24:18.831 "uuid": "e5207389-e872-4cc1-ab03-34db83df7dc8", 00:24:18.831 
"strip_size_kb": 64, 00:24:18.831 "state": "online", 00:24:18.831 "raid_level": "raid5f", 00:24:18.831 "superblock": true, 00:24:18.831 "num_base_bdevs": 3, 00:24:18.831 "num_base_bdevs_discovered": 3, 00:24:18.831 "num_base_bdevs_operational": 3, 00:24:18.831 "base_bdevs_list": [ 00:24:18.831 { 00:24:18.831 "name": "spare", 00:24:18.831 "uuid": "e67da742-647d-5979-a00b-daccb55559bd", 00:24:18.831 "is_configured": true, 00:24:18.831 "data_offset": 2048, 00:24:18.831 "data_size": 63488 00:24:18.831 }, 00:24:18.831 { 00:24:18.831 "name": "BaseBdev2", 00:24:18.831 "uuid": "b93232bd-34fd-5c02-a378-c626b660d580", 00:24:18.831 "is_configured": true, 00:24:18.831 "data_offset": 2048, 00:24:18.831 "data_size": 63488 00:24:18.831 }, 00:24:18.831 { 00:24:18.831 "name": "BaseBdev3", 00:24:18.831 "uuid": "4668b2f2-3888-596a-9b1b-6f3a4b87d7db", 00:24:18.831 "is_configured": true, 00:24:18.831 "data_offset": 2048, 00:24:18.831 "data_size": 63488 00:24:18.831 } 00:24:18.831 ] 00:24:18.831 }' 00:24:18.831 10:49:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:18.831 10:49:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:18.831 10:49:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:18.831 10:49:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:18.831 10:49:45 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.831 10:49:45 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:19.090 10:49:45 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:19.090 10:49:45 -- bdev/bdev_raid.sh@709 -- # killprocess 139912 00:24:19.090 10:49:45 -- common/autotest_common.sh@926 -- # '[' -z 139912 ']' 00:24:19.090 10:49:45 -- common/autotest_common.sh@930 -- # kill -0 139912 00:24:19.090 10:49:45 -- common/autotest_common.sh@931 -- # uname 00:24:19.090 10:49:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:19.090 10:49:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 139912 00:24:19.348 killing process with pid 139912 00:24:19.348 Received shutdown signal, test time was about 60.000000 seconds 00:24:19.348 00:24:19.348 Latency(us) 00:24:19.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.348 =================================================================================================================== 00:24:19.348 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.348 10:49:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:19.348 10:49:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:19.348 10:49:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 139912' 00:24:19.348 10:49:45 -- common/autotest_common.sh@945 -- # kill 139912 00:24:19.348 10:49:45 -- common/autotest_common.sh@950 -- # wait 139912 00:24:19.348 [2024-07-24 10:49:45.775567] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:19.348 [2024-07-24 10:49:45.775684] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:19.348 [2024-07-24 10:49:45.775781] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:19.348 [2024-07-24 10:49:45.775794] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:24:19.348 [2024-07-24 10:49:45.830420] bdev_raid.c:1251:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:24:19.606 ************************************ 00:24:19.606 END TEST raid5f_rebuild_test_sb 00:24:19.606 ************************************ 00:24:19.606 10:49:46 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:19.607 00:24:19.607 real 0m26.620s 00:24:19.607 user 0m43.280s 00:24:19.607 sys 0m3.333s 00:24:19.607 10:49:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.607 10:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:19.607 10:49:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:19.607 10:49:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:19.607 10:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:19.607 ************************************ 00:24:19.607 START TEST raid5f_state_function_test 00:24:19.607 ************************************ 00:24:19.607 10:49:46 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:19.607 Process raid pid: 140572 00:24:19.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
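The raid5f_state_function_test that starts here drives everything through the JSON-RPC socket named in the trace (/var/tmp/spdk-raid.sock): a bare bdev_svc app is launched, malloc base bdevs are created, and they are assembled into a raid5f array called Existed_Raid whose state is then inspected. A minimal standalone sketch of that flow, pieced together only from commands visible in this trace, follows; the backgrounding, sleep, and kill lines are illustrative assumptions, and note that the test itself deliberately issues bdev_raid_create before the base bdevs exist so it can exercise the "configuring" state first.

    #!/usr/bin/env bash
    # Launch a bare bdev service on the RPC socket used by the test (path taken from the trace).
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    svc_pid=$!
    sleep 1   # assumption: crude wait; the test instead polls the socket via its waitforlisten helper

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Four 32 MB malloc bdevs with 512-byte blocks, matching the bdev_malloc_create calls in the trace.
    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        $rpc bdev_malloc_create 32 512 -b "$b"
    done

    # Assemble them into a raid5f bdev with a 64 KiB strip size, as in the trace (no superblock here).
    $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Inspect the resulting array; the tests filter this same output with jq.
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    kill "$svc_pid"   # assumption: simple teardown instead of the test's killprocess helper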
00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=140572 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 140572' 00:24:19.607 10:49:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 140572 /var/tmp/spdk-raid.sock 00:24:19.607 10:49:46 -- common/autotest_common.sh@819 -- # '[' -z 140572 ']' 00:24:19.607 10:49:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:19.607 10:49:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:19.607 10:49:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:19.607 10:49:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:19.607 10:49:46 -- common/autotest_common.sh@10 -- # set +x 00:24:19.607 [2024-07-24 10:49:46.206325] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:24:19.607 [2024-07-24 10:49:46.206602] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.865 [2024-07-24 10:49:46.358143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.865 [2024-07-24 10:49:46.455588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.865 [2024-07-24 10:49:46.509440] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:20.803 10:49:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:20.803 10:49:47 -- common/autotest_common.sh@852 -- # return 0 00:24:20.803 10:49:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:20.803 [2024-07-24 10:49:47.470933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:20.803 [2024-07-24 10:49:47.471051] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:20.803 [2024-07-24 10:49:47.471067] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:20.803 [2024-07-24 10:49:47.471088] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:20.803 [2024-07-24 10:49:47.471095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:20.803 [2024-07-24 10:49:47.471150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:20.803 [2024-07-24 10:49:47.471160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:20.803 [2024-07-24 10:49:47.471189] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:21.061 "name": "Existed_Raid", 00:24:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.061 "strip_size_kb": 64, 00:24:21.061 "state": "configuring", 00:24:21.061 "raid_level": "raid5f", 00:24:21.061 "superblock": false, 00:24:21.061 "num_base_bdevs": 4, 00:24:21.061 "num_base_bdevs_discovered": 0, 00:24:21.061 "num_base_bdevs_operational": 4, 00:24:21.061 "base_bdevs_list": [ 00:24:21.061 { 00:24:21.061 
"name": "BaseBdev1", 00:24:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.061 "is_configured": false, 00:24:21.061 "data_offset": 0, 00:24:21.061 "data_size": 0 00:24:21.061 }, 00:24:21.061 { 00:24:21.061 "name": "BaseBdev2", 00:24:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.061 "is_configured": false, 00:24:21.061 "data_offset": 0, 00:24:21.061 "data_size": 0 00:24:21.061 }, 00:24:21.061 { 00:24:21.061 "name": "BaseBdev3", 00:24:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.061 "is_configured": false, 00:24:21.061 "data_offset": 0, 00:24:21.061 "data_size": 0 00:24:21.061 }, 00:24:21.061 { 00:24:21.061 "name": "BaseBdev4", 00:24:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.061 "is_configured": false, 00:24:21.061 "data_offset": 0, 00:24:21.061 "data_size": 0 00:24:21.061 } 00:24:21.061 ] 00:24:21.061 }' 00:24:21.061 10:49:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.061 10:49:47 -- common/autotest_common.sh@10 -- # set +x 00:24:21.995 10:49:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:21.995 [2024-07-24 10:49:48.554967] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:21.995 [2024-07-24 10:49:48.555054] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:24:21.995 10:49:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:22.252 [2024-07-24 10:49:48.863124] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:22.252 [2024-07-24 10:49:48.863221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:22.252 [2024-07-24 10:49:48.863235] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:22.252 [2024-07-24 10:49:48.863264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:22.252 [2024-07-24 10:49:48.863273] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:22.252 [2024-07-24 10:49:48.863292] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:22.252 [2024-07-24 10:49:48.863299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:22.252 [2024-07-24 10:49:48.863331] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:22.252 10:49:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:22.509 [2024-07-24 10:49:49.110932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:22.509 BaseBdev1 00:24:22.509 10:49:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:22.509 10:49:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:22.509 10:49:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:22.509 10:49:49 -- common/autotest_common.sh@889 -- # local i 00:24:22.509 10:49:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:22.509 10:49:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:22.509 10:49:49 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:22.766 10:49:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:23.025 [ 00:24:23.025 { 00:24:23.025 "name": "BaseBdev1", 00:24:23.025 "aliases": [ 00:24:23.025 "49c76646-e352-4534-9fcd-5938ef2d1d9e" 00:24:23.025 ], 00:24:23.025 "product_name": "Malloc disk", 00:24:23.025 "block_size": 512, 00:24:23.025 "num_blocks": 65536, 00:24:23.025 "uuid": "49c76646-e352-4534-9fcd-5938ef2d1d9e", 00:24:23.025 "assigned_rate_limits": { 00:24:23.025 "rw_ios_per_sec": 0, 00:24:23.025 "rw_mbytes_per_sec": 0, 00:24:23.025 "r_mbytes_per_sec": 0, 00:24:23.025 "w_mbytes_per_sec": 0 00:24:23.025 }, 00:24:23.025 "claimed": true, 00:24:23.025 "claim_type": "exclusive_write", 00:24:23.025 "zoned": false, 00:24:23.025 "supported_io_types": { 00:24:23.025 "read": true, 00:24:23.025 "write": true, 00:24:23.025 "unmap": true, 00:24:23.025 "write_zeroes": true, 00:24:23.025 "flush": true, 00:24:23.025 "reset": true, 00:24:23.025 "compare": false, 00:24:23.025 "compare_and_write": false, 00:24:23.025 "abort": true, 00:24:23.025 "nvme_admin": false, 00:24:23.025 "nvme_io": false 00:24:23.025 }, 00:24:23.025 "memory_domains": [ 00:24:23.025 { 00:24:23.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.025 "dma_device_type": 2 00:24:23.025 } 00:24:23.025 ], 00:24:23.025 "driver_specific": {} 00:24:23.025 } 00:24:23.025 ] 00:24:23.025 10:49:49 -- common/autotest_common.sh@895 -- # return 0 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.025 10:49:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:23.590 10:49:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:23.590 "name": "Existed_Raid", 00:24:23.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.590 "strip_size_kb": 64, 00:24:23.590 "state": "configuring", 00:24:23.590 "raid_level": "raid5f", 00:24:23.590 "superblock": false, 00:24:23.590 "num_base_bdevs": 4, 00:24:23.590 "num_base_bdevs_discovered": 1, 00:24:23.590 "num_base_bdevs_operational": 4, 00:24:23.590 "base_bdevs_list": [ 00:24:23.590 { 00:24:23.590 "name": "BaseBdev1", 00:24:23.590 "uuid": "49c76646-e352-4534-9fcd-5938ef2d1d9e", 00:24:23.590 "is_configured": true, 00:24:23.590 "data_offset": 0, 00:24:23.590 "data_size": 65536 00:24:23.590 }, 00:24:23.590 { 00:24:23.590 "name": "BaseBdev2", 00:24:23.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.590 "is_configured": false, 00:24:23.590 "data_offset": 0, 00:24:23.590 "data_size": 0 00:24:23.590 }, 
00:24:23.590 { 00:24:23.590 "name": "BaseBdev3", 00:24:23.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.590 "is_configured": false, 00:24:23.590 "data_offset": 0, 00:24:23.590 "data_size": 0 00:24:23.590 }, 00:24:23.590 { 00:24:23.590 "name": "BaseBdev4", 00:24:23.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:23.590 "is_configured": false, 00:24:23.590 "data_offset": 0, 00:24:23.590 "data_size": 0 00:24:23.590 } 00:24:23.590 ] 00:24:23.590 }' 00:24:23.590 10:49:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:23.590 10:49:50 -- common/autotest_common.sh@10 -- # set +x 00:24:24.163 10:49:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:24.419 [2024-07-24 10:49:51.087477] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:24.419 [2024-07-24 10:49:51.087592] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:24:24.679 10:49:51 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:24.679 10:49:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:24.679 [2024-07-24 10:49:51.335683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:24.679 [2024-07-24 10:49:51.337983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:24.679 [2024-07-24 10:49:51.338080] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:24.679 [2024-07-24 10:49:51.338095] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:24.679 [2024-07-24 10:49:51.338122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:24.679 [2024-07-24 10:49:51.338132] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:24.679 [2024-07-24 10:49:51.338149] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:24.936 10:49:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:24.937 10:49:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.937 10:49:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.937 10:49:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:24.937 "name": "Existed_Raid", 00:24:24.937 
"uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.937 "strip_size_kb": 64, 00:24:24.937 "state": "configuring", 00:24:24.937 "raid_level": "raid5f", 00:24:24.937 "superblock": false, 00:24:24.937 "num_base_bdevs": 4, 00:24:24.937 "num_base_bdevs_discovered": 1, 00:24:24.937 "num_base_bdevs_operational": 4, 00:24:24.937 "base_bdevs_list": [ 00:24:24.937 { 00:24:24.937 "name": "BaseBdev1", 00:24:24.937 "uuid": "49c76646-e352-4534-9fcd-5938ef2d1d9e", 00:24:24.937 "is_configured": true, 00:24:24.937 "data_offset": 0, 00:24:24.937 "data_size": 65536 00:24:24.937 }, 00:24:24.937 { 00:24:24.937 "name": "BaseBdev2", 00:24:24.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.937 "is_configured": false, 00:24:24.937 "data_offset": 0, 00:24:24.937 "data_size": 0 00:24:24.937 }, 00:24:24.937 { 00:24:24.937 "name": "BaseBdev3", 00:24:24.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.937 "is_configured": false, 00:24:24.937 "data_offset": 0, 00:24:24.937 "data_size": 0 00:24:24.937 }, 00:24:24.937 { 00:24:24.937 "name": "BaseBdev4", 00:24:24.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.937 "is_configured": false, 00:24:24.937 "data_offset": 0, 00:24:24.937 "data_size": 0 00:24:24.937 } 00:24:24.937 ] 00:24:24.937 }' 00:24:24.937 10:49:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:24.937 10:49:51 -- common/autotest_common.sh@10 -- # set +x 00:24:25.888 10:49:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:26.151 [2024-07-24 10:49:52.587650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:26.151 BaseBdev2 00:24:26.151 10:49:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:26.151 10:49:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:26.151 10:49:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:26.151 10:49:52 -- common/autotest_common.sh@889 -- # local i 00:24:26.151 10:49:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:26.151 10:49:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:26.151 10:49:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:26.409 10:49:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:26.668 [ 00:24:26.668 { 00:24:26.668 "name": "BaseBdev2", 00:24:26.668 "aliases": [ 00:24:26.668 "06cae6b2-0dca-4fac-ae32-bc2c0d84ce3d" 00:24:26.668 ], 00:24:26.668 "product_name": "Malloc disk", 00:24:26.668 "block_size": 512, 00:24:26.668 "num_blocks": 65536, 00:24:26.668 "uuid": "06cae6b2-0dca-4fac-ae32-bc2c0d84ce3d", 00:24:26.668 "assigned_rate_limits": { 00:24:26.668 "rw_ios_per_sec": 0, 00:24:26.668 "rw_mbytes_per_sec": 0, 00:24:26.668 "r_mbytes_per_sec": 0, 00:24:26.668 "w_mbytes_per_sec": 0 00:24:26.668 }, 00:24:26.668 "claimed": true, 00:24:26.668 "claim_type": "exclusive_write", 00:24:26.668 "zoned": false, 00:24:26.668 "supported_io_types": { 00:24:26.668 "read": true, 00:24:26.668 "write": true, 00:24:26.668 "unmap": true, 00:24:26.668 "write_zeroes": true, 00:24:26.668 "flush": true, 00:24:26.668 "reset": true, 00:24:26.668 "compare": false, 00:24:26.668 "compare_and_write": false, 00:24:26.668 "abort": true, 00:24:26.668 "nvme_admin": false, 00:24:26.668 "nvme_io": false 00:24:26.668 }, 00:24:26.668 "memory_domains": [ 
00:24:26.668 { 00:24:26.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.668 "dma_device_type": 2 00:24:26.668 } 00:24:26.668 ], 00:24:26.668 "driver_specific": {} 00:24:26.668 } 00:24:26.668 ] 00:24:26.668 10:49:53 -- common/autotest_common.sh@895 -- # return 0 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.668 10:49:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.934 10:49:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.934 "name": "Existed_Raid", 00:24:26.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.934 "strip_size_kb": 64, 00:24:26.934 "state": "configuring", 00:24:26.934 "raid_level": "raid5f", 00:24:26.934 "superblock": false, 00:24:26.934 "num_base_bdevs": 4, 00:24:26.934 "num_base_bdevs_discovered": 2, 00:24:26.934 "num_base_bdevs_operational": 4, 00:24:26.934 "base_bdevs_list": [ 00:24:26.934 { 00:24:26.934 "name": "BaseBdev1", 00:24:26.934 "uuid": "49c76646-e352-4534-9fcd-5938ef2d1d9e", 00:24:26.934 "is_configured": true, 00:24:26.934 "data_offset": 0, 00:24:26.934 "data_size": 65536 00:24:26.934 }, 00:24:26.934 { 00:24:26.934 "name": "BaseBdev2", 00:24:26.934 "uuid": "06cae6b2-0dca-4fac-ae32-bc2c0d84ce3d", 00:24:26.934 "is_configured": true, 00:24:26.934 "data_offset": 0, 00:24:26.934 "data_size": 65536 00:24:26.934 }, 00:24:26.934 { 00:24:26.934 "name": "BaseBdev3", 00:24:26.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.934 "is_configured": false, 00:24:26.934 "data_offset": 0, 00:24:26.934 "data_size": 0 00:24:26.934 }, 00:24:26.934 { 00:24:26.934 "name": "BaseBdev4", 00:24:26.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.934 "is_configured": false, 00:24:26.934 "data_offset": 0, 00:24:26.934 "data_size": 0 00:24:26.934 } 00:24:26.934 ] 00:24:26.934 }' 00:24:26.934 10:49:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.934 10:49:53 -- common/autotest_common.sh@10 -- # set +x 00:24:27.519 10:49:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:27.778 [2024-07-24 10:49:54.385132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:27.778 BaseBdev3 00:24:27.778 10:49:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:27.778 10:49:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:27.778 10:49:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:27.778 
10:49:54 -- common/autotest_common.sh@889 -- # local i 00:24:27.778 10:49:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:27.778 10:49:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:27.778 10:49:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:28.036 10:49:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:28.648 [ 00:24:28.648 { 00:24:28.648 "name": "BaseBdev3", 00:24:28.648 "aliases": [ 00:24:28.648 "7a05f5f5-99c6-4396-a011-1736f2fdede2" 00:24:28.648 ], 00:24:28.648 "product_name": "Malloc disk", 00:24:28.648 "block_size": 512, 00:24:28.648 "num_blocks": 65536, 00:24:28.648 "uuid": "7a05f5f5-99c6-4396-a011-1736f2fdede2", 00:24:28.648 "assigned_rate_limits": { 00:24:28.648 "rw_ios_per_sec": 0, 00:24:28.648 "rw_mbytes_per_sec": 0, 00:24:28.648 "r_mbytes_per_sec": 0, 00:24:28.648 "w_mbytes_per_sec": 0 00:24:28.648 }, 00:24:28.648 "claimed": true, 00:24:28.648 "claim_type": "exclusive_write", 00:24:28.648 "zoned": false, 00:24:28.648 "supported_io_types": { 00:24:28.648 "read": true, 00:24:28.648 "write": true, 00:24:28.648 "unmap": true, 00:24:28.648 "write_zeroes": true, 00:24:28.648 "flush": true, 00:24:28.648 "reset": true, 00:24:28.648 "compare": false, 00:24:28.648 "compare_and_write": false, 00:24:28.648 "abort": true, 00:24:28.648 "nvme_admin": false, 00:24:28.648 "nvme_io": false 00:24:28.648 }, 00:24:28.648 "memory_domains": [ 00:24:28.648 { 00:24:28.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.648 "dma_device_type": 2 00:24:28.648 } 00:24:28.648 ], 00:24:28.648 "driver_specific": {} 00:24:28.648 } 00:24:28.648 ] 00:24:28.648 10:49:55 -- common/autotest_common.sh@895 -- # return 0 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.648 10:49:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.906 10:49:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:28.906 "name": "Existed_Raid", 00:24:28.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.906 "strip_size_kb": 64, 00:24:28.906 "state": "configuring", 00:24:28.906 "raid_level": "raid5f", 00:24:28.906 "superblock": false, 00:24:28.906 "num_base_bdevs": 4, 00:24:28.906 "num_base_bdevs_discovered": 3, 00:24:28.906 "num_base_bdevs_operational": 4, 00:24:28.906 "base_bdevs_list": [ 00:24:28.906 { 00:24:28.906 "name": 
"BaseBdev1", 00:24:28.906 "uuid": "49c76646-e352-4534-9fcd-5938ef2d1d9e", 00:24:28.906 "is_configured": true, 00:24:28.906 "data_offset": 0, 00:24:28.907 "data_size": 65536 00:24:28.907 }, 00:24:28.907 { 00:24:28.907 "name": "BaseBdev2", 00:24:28.907 "uuid": "06cae6b2-0dca-4fac-ae32-bc2c0d84ce3d", 00:24:28.907 "is_configured": true, 00:24:28.907 "data_offset": 0, 00:24:28.907 "data_size": 65536 00:24:28.907 }, 00:24:28.907 { 00:24:28.907 "name": "BaseBdev3", 00:24:28.907 "uuid": "7a05f5f5-99c6-4396-a011-1736f2fdede2", 00:24:28.907 "is_configured": true, 00:24:28.907 "data_offset": 0, 00:24:28.907 "data_size": 65536 00:24:28.907 }, 00:24:28.907 { 00:24:28.907 "name": "BaseBdev4", 00:24:28.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.907 "is_configured": false, 00:24:28.907 "data_offset": 0, 00:24:28.907 "data_size": 0 00:24:28.907 } 00:24:28.907 ] 00:24:28.907 }' 00:24:28.907 10:49:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:28.907 10:49:55 -- common/autotest_common.sh@10 -- # set +x 00:24:29.473 10:49:56 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:29.732 [2024-07-24 10:49:56.334922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:29.732 [2024-07-24 10:49:56.335019] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006080 00:24:29.732 [2024-07-24 10:49:56.335034] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:29.732 [2024-07-24 10:49:56.335209] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002120 00:24:29.732 [2024-07-24 10:49:56.336128] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006080 00:24:29.732 [2024-07-24 10:49:56.336155] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006080 00:24:29.732 [2024-07-24 10:49:56.336420] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:29.732 BaseBdev4 00:24:29.732 10:49:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:29.732 10:49:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:29.732 10:49:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:29.732 10:49:56 -- common/autotest_common.sh@889 -- # local i 00:24:29.732 10:49:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:29.732 10:49:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:29.732 10:49:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:29.990 10:49:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:30.248 [ 00:24:30.248 { 00:24:30.248 "name": "BaseBdev4", 00:24:30.248 "aliases": [ 00:24:30.248 "2ca3bd67-3983-41a3-835e-cc9331e58131" 00:24:30.248 ], 00:24:30.248 "product_name": "Malloc disk", 00:24:30.248 "block_size": 512, 00:24:30.248 "num_blocks": 65536, 00:24:30.248 "uuid": "2ca3bd67-3983-41a3-835e-cc9331e58131", 00:24:30.248 "assigned_rate_limits": { 00:24:30.248 "rw_ios_per_sec": 0, 00:24:30.248 "rw_mbytes_per_sec": 0, 00:24:30.248 "r_mbytes_per_sec": 0, 00:24:30.248 "w_mbytes_per_sec": 0 00:24:30.248 }, 00:24:30.248 "claimed": true, 00:24:30.248 "claim_type": "exclusive_write", 00:24:30.248 "zoned": false, 00:24:30.248 
"supported_io_types": { 00:24:30.248 "read": true, 00:24:30.248 "write": true, 00:24:30.248 "unmap": true, 00:24:30.248 "write_zeroes": true, 00:24:30.248 "flush": true, 00:24:30.248 "reset": true, 00:24:30.248 "compare": false, 00:24:30.248 "compare_and_write": false, 00:24:30.248 "abort": true, 00:24:30.248 "nvme_admin": false, 00:24:30.248 "nvme_io": false 00:24:30.248 }, 00:24:30.248 "memory_domains": [ 00:24:30.248 { 00:24:30.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.248 "dma_device_type": 2 00:24:30.248 } 00:24:30.248 ], 00:24:30.248 "driver_specific": {} 00:24:30.248 } 00:24:30.248 ] 00:24:30.248 10:49:56 -- common/autotest_common.sh@895 -- # return 0 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.248 10:49:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.815 10:49:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:30.815 "name": "Existed_Raid", 00:24:30.815 "uuid": "00bbd563-2782-4cf3-8dad-d317f9824594", 00:24:30.815 "strip_size_kb": 64, 00:24:30.815 "state": "online", 00:24:30.815 "raid_level": "raid5f", 00:24:30.815 "superblock": false, 00:24:30.815 "num_base_bdevs": 4, 00:24:30.815 "num_base_bdevs_discovered": 4, 00:24:30.815 "num_base_bdevs_operational": 4, 00:24:30.815 "base_bdevs_list": [ 00:24:30.815 { 00:24:30.815 "name": "BaseBdev1", 00:24:30.815 "uuid": "49c76646-e352-4534-9fcd-5938ef2d1d9e", 00:24:30.815 "is_configured": true, 00:24:30.815 "data_offset": 0, 00:24:30.815 "data_size": 65536 00:24:30.815 }, 00:24:30.815 { 00:24:30.815 "name": "BaseBdev2", 00:24:30.815 "uuid": "06cae6b2-0dca-4fac-ae32-bc2c0d84ce3d", 00:24:30.815 "is_configured": true, 00:24:30.815 "data_offset": 0, 00:24:30.815 "data_size": 65536 00:24:30.815 }, 00:24:30.815 { 00:24:30.815 "name": "BaseBdev3", 00:24:30.815 "uuid": "7a05f5f5-99c6-4396-a011-1736f2fdede2", 00:24:30.815 "is_configured": true, 00:24:30.815 "data_offset": 0, 00:24:30.815 "data_size": 65536 00:24:30.815 }, 00:24:30.815 { 00:24:30.815 "name": "BaseBdev4", 00:24:30.815 "uuid": "2ca3bd67-3983-41a3-835e-cc9331e58131", 00:24:30.815 "is_configured": true, 00:24:30.815 "data_offset": 0, 00:24:30.815 "data_size": 65536 00:24:30.815 } 00:24:30.815 ] 00:24:30.815 }' 00:24:30.815 10:49:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:30.815 10:49:57 -- common/autotest_common.sh@10 -- # set +x 00:24:31.382 10:49:57 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:24:31.641 [2024-07-24 10:49:58.131616] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.641 10:49:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.899 10:49:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:31.899 "name": "Existed_Raid", 00:24:31.899 "uuid": "00bbd563-2782-4cf3-8dad-d317f9824594", 00:24:31.899 "strip_size_kb": 64, 00:24:31.899 "state": "online", 00:24:31.899 "raid_level": "raid5f", 00:24:31.899 "superblock": false, 00:24:31.899 "num_base_bdevs": 4, 00:24:31.899 "num_base_bdevs_discovered": 3, 00:24:31.899 "num_base_bdevs_operational": 3, 00:24:31.899 "base_bdevs_list": [ 00:24:31.899 { 00:24:31.899 "name": null, 00:24:31.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.899 "is_configured": false, 00:24:31.899 "data_offset": 0, 00:24:31.899 "data_size": 65536 00:24:31.899 }, 00:24:31.899 { 00:24:31.899 "name": "BaseBdev2", 00:24:31.899 "uuid": "06cae6b2-0dca-4fac-ae32-bc2c0d84ce3d", 00:24:31.899 "is_configured": true, 00:24:31.899 "data_offset": 0, 00:24:31.899 "data_size": 65536 00:24:31.899 }, 00:24:31.899 { 00:24:31.899 "name": "BaseBdev3", 00:24:31.899 "uuid": "7a05f5f5-99c6-4396-a011-1736f2fdede2", 00:24:31.899 "is_configured": true, 00:24:31.899 "data_offset": 0, 00:24:31.899 "data_size": 65536 00:24:31.899 }, 00:24:31.899 { 00:24:31.899 "name": "BaseBdev4", 00:24:31.899 "uuid": "2ca3bd67-3983-41a3-835e-cc9331e58131", 00:24:31.899 "is_configured": true, 00:24:31.899 "data_offset": 0, 00:24:31.899 "data_size": 65536 00:24:31.899 } 00:24:31.899 ] 00:24:31.899 }' 00:24:31.899 10:49:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:31.899 10:49:58 -- common/autotest_common.sh@10 -- # set +x 00:24:32.511 10:49:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:32.511 10:49:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:32.511 10:49:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.511 10:49:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:32.768 10:49:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:32.768 10:49:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
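The hot-remove check above follows the verification pattern used throughout these tests: dump all raid bdevs once, select the array of interest with jq, and compare individual fields against the expected values; here the array must stay online at raid5f with 3 of its 4 base bdevs discovered and operational after BaseBdev1 was deleted. A condensed sketch of that check, loosely modelled on verify_raid_bdev_state with the expected values hard-coded for illustration:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Pull the raid bdev description once and pick out Existed_Raid, as the trace does.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Field-by-field comparison against the expected post-removal state.
    [[ $(jq -r '.state'      <<< "$info") == online ]] || exit 1
    [[ $(jq -r '.raid_level' <<< "$info") == raid5f ]] || exit 1
    (( $(jq -r '.num_base_bdevs_discovered'  <<< "$info") == 3 )) || exit 1
    (( $(jq -r '.num_base_bdevs_operational' <<< "$info") == 3 )) || exit 1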
00:24:32.768 10:49:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:33.027 [2024-07-24 10:49:59.658203] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:33.027 [2024-07-24 10:49:59.658262] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.027 [2024-07-24 10:49:59.658367] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.027 10:49:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:33.027 10:49:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:33.027 10:49:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.027 10:49:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:33.594 10:49:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:33.594 10:49:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:33.594 10:49:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:33.594 [2024-07-24 10:50:00.252250] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:33.853 10:50:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:33.853 10:50:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:33.853 10:50:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.853 10:50:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:34.112 10:50:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:34.112 10:50:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:34.112 10:50:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:34.371 [2024-07-24 10:50:00.850171] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:34.371 [2024-07-24 10:50:00.850256] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006080 name Existed_Raid, state offline 00:24:34.371 10:50:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:34.371 10:50:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:34.371 10:50:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.371 10:50:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:34.629 10:50:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:34.629 10:50:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:34.629 10:50:01 -- bdev/bdev_raid.sh@287 -- # killprocess 140572 00:24:34.629 10:50:01 -- common/autotest_common.sh@926 -- # '[' -z 140572 ']' 00:24:34.629 10:50:01 -- common/autotest_common.sh@930 -- # kill -0 140572 00:24:34.629 10:50:01 -- common/autotest_common.sh@931 -- # uname 00:24:34.629 10:50:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:34.629 10:50:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140572 00:24:34.629 10:50:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:34.629 10:50:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:34.629 10:50:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140572' 00:24:34.629 killing process with pid 140572 00:24:34.629 10:50:01 -- 
common/autotest_common.sh@945 -- # kill 140572 00:24:34.629 10:50:01 -- common/autotest_common.sh@950 -- # wait 140572 00:24:34.629 [2024-07-24 10:50:01.156530] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:34.629 [2024-07-24 10:50:01.156635] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:34.886 ************************************ 00:24:34.886 END TEST raid5f_state_function_test 00:24:34.886 ************************************ 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:34.886 00:24:34.886 real 0m15.252s 00:24:34.886 user 0m28.413s 00:24:34.886 sys 0m1.794s 00:24:34.886 10:50:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.886 10:50:01 -- common/autotest_common.sh@10 -- # set +x 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:34.886 10:50:01 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:34.886 10:50:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:34.886 10:50:01 -- common/autotest_common.sh@10 -- # set +x 00:24:34.886 ************************************ 00:24:34.886 START TEST raid5f_state_function_test_sb 00:24:34.886 ************************************ 00:24:34.886 10:50:01 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:34.886 10:50:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:34.887 10:50:01 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=141017 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 141017' 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:34.887 Process raid pid: 141017 00:24:34.887 10:50:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 141017 /var/tmp/spdk-raid.sock 00:24:34.887 10:50:01 -- common/autotest_common.sh@819 -- # '[' -z 141017 ']' 00:24:34.887 10:50:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:34.887 10:50:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:34.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:34.887 10:50:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:34.887 10:50:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:34.887 10:50:01 -- common/autotest_common.sh@10 -- # set +x 00:24:34.887 [2024-07-24 10:50:01.519175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:34.887 [2024-07-24 10:50:01.519441] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.144 [2024-07-24 10:50:01.669048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.144 [2024-07-24 10:50:01.799666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.402 [2024-07-24 10:50:01.877404] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:35.969 10:50:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:35.969 10:50:02 -- common/autotest_common.sh@852 -- # return 0 00:24:35.969 10:50:02 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:36.228 [2024-07-24 10:50:02.811558] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:36.228 [2024-07-24 10:50:02.811902] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:36.228 [2024-07-24 10:50:02.811966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:36.228 [2024-07-24 10:50:02.812022] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:36.228 [2024-07-24 10:50:02.812060] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:36.228 [2024-07-24 10:50:02.812144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:36.228 [2024-07-24 10:50:02.812183] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:36.228 [2024-07-24 10:50:02.812240] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid5f 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.228 10:50:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.486 10:50:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.486 "name": "Existed_Raid", 00:24:36.486 "uuid": "a7dd75ef-ed21-4592-a988-7d93f3561e67", 00:24:36.486 "strip_size_kb": 64, 00:24:36.486 "state": "configuring", 00:24:36.486 "raid_level": "raid5f", 00:24:36.486 "superblock": true, 00:24:36.486 "num_base_bdevs": 4, 00:24:36.486 "num_base_bdevs_discovered": 0, 00:24:36.486 "num_base_bdevs_operational": 4, 00:24:36.486 "base_bdevs_list": [ 00:24:36.486 { 00:24:36.486 "name": "BaseBdev1", 00:24:36.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.486 "is_configured": false, 00:24:36.486 "data_offset": 0, 00:24:36.486 "data_size": 0 00:24:36.486 }, 00:24:36.486 { 00:24:36.486 "name": "BaseBdev2", 00:24:36.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.486 "is_configured": false, 00:24:36.486 "data_offset": 0, 00:24:36.486 "data_size": 0 00:24:36.486 }, 00:24:36.486 { 00:24:36.486 "name": "BaseBdev3", 00:24:36.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.486 "is_configured": false, 00:24:36.486 "data_offset": 0, 00:24:36.486 "data_size": 0 00:24:36.486 }, 00:24:36.486 { 00:24:36.486 "name": "BaseBdev4", 00:24:36.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.486 "is_configured": false, 00:24:36.486 "data_offset": 0, 00:24:36.486 "data_size": 0 00:24:36.486 } 00:24:36.486 ] 00:24:36.486 }' 00:24:36.486 10:50:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.486 10:50:03 -- common/autotest_common.sh@10 -- # set +x 00:24:37.427 10:50:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:37.427 [2024-07-24 10:50:04.039615] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:37.427 [2024-07-24 10:50:04.040009] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005480 name Existed_Raid, state configuring 00:24:37.427 10:50:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:37.696 [2024-07-24 10:50:04.315764] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:37.696 [2024-07-24 10:50:04.316205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:37.696 [2024-07-24 10:50:04.316327] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:37.696 [2024-07-24 10:50:04.316481] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:37.696 [2024-07-24 10:50:04.316603] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:37.696 
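The trace above drives every step through rpc.py against the test's private RPC socket. Stripped of the harness plumbing, the create-then-inspect step being exercised here looks roughly like the sketch below; the socket path, RPC names and jq filter are taken straight from the trace, and only the packaging into a standalone snippet is assumed.

    # Minimal sketch (not part of the harness) of the step traced above.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Ask for a raid5f bdev with a superblock (-s) and a 64 KiB strip size (-z 64)
    # before any base bdev exists; the raid is created in the "configuring" state.
    $RPC bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Inspect it; "state" should read "configuring" with zero base bdevs discovered.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'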
[2024-07-24 10:50:04.316671] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:37.696 [2024-07-24 10:50:04.316922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:37.696 [2024-07-24 10:50:04.316998] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:37.696 10:50:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:37.954 [2024-07-24 10:50:04.598989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:37.954 BaseBdev1 00:24:37.954 10:50:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:37.954 10:50:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:37.954 10:50:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:37.954 10:50:04 -- common/autotest_common.sh@889 -- # local i 00:24:37.954 10:50:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:37.954 10:50:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:37.954 10:50:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:38.212 10:50:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:38.470 [ 00:24:38.470 { 00:24:38.470 "name": "BaseBdev1", 00:24:38.470 "aliases": [ 00:24:38.470 "08299ad8-a58d-4cb8-be6b-4b3d7d6ffee7" 00:24:38.470 ], 00:24:38.470 "product_name": "Malloc disk", 00:24:38.470 "block_size": 512, 00:24:38.470 "num_blocks": 65536, 00:24:38.470 "uuid": "08299ad8-a58d-4cb8-be6b-4b3d7d6ffee7", 00:24:38.470 "assigned_rate_limits": { 00:24:38.470 "rw_ios_per_sec": 0, 00:24:38.470 "rw_mbytes_per_sec": 0, 00:24:38.470 "r_mbytes_per_sec": 0, 00:24:38.470 "w_mbytes_per_sec": 0 00:24:38.470 }, 00:24:38.470 "claimed": true, 00:24:38.470 "claim_type": "exclusive_write", 00:24:38.470 "zoned": false, 00:24:38.470 "supported_io_types": { 00:24:38.470 "read": true, 00:24:38.470 "write": true, 00:24:38.470 "unmap": true, 00:24:38.470 "write_zeroes": true, 00:24:38.470 "flush": true, 00:24:38.470 "reset": true, 00:24:38.470 "compare": false, 00:24:38.470 "compare_and_write": false, 00:24:38.470 "abort": true, 00:24:38.470 "nvme_admin": false, 00:24:38.470 "nvme_io": false 00:24:38.470 }, 00:24:38.470 "memory_domains": [ 00:24:38.470 { 00:24:38.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.470 "dma_device_type": 2 00:24:38.470 } 00:24:38.470 ], 00:24:38.470 "driver_specific": {} 00:24:38.470 } 00:24:38.470 ] 00:24:38.470 10:50:05 -- common/autotest_common.sh@895 -- # return 0 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:38.470 10:50:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:38.470 
10:50:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:38.729 10:50:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.729 10:50:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.729 10:50:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:38.729 "name": "Existed_Raid", 00:24:38.729 "uuid": "f2295dfd-28a7-4f04-bdd1-a0427f445499", 00:24:38.729 "strip_size_kb": 64, 00:24:38.729 "state": "configuring", 00:24:38.729 "raid_level": "raid5f", 00:24:38.729 "superblock": true, 00:24:38.729 "num_base_bdevs": 4, 00:24:38.729 "num_base_bdevs_discovered": 1, 00:24:38.729 "num_base_bdevs_operational": 4, 00:24:38.729 "base_bdevs_list": [ 00:24:38.729 { 00:24:38.729 "name": "BaseBdev1", 00:24:38.729 "uuid": "08299ad8-a58d-4cb8-be6b-4b3d7d6ffee7", 00:24:38.729 "is_configured": true, 00:24:38.729 "data_offset": 2048, 00:24:38.729 "data_size": 63488 00:24:38.729 }, 00:24:38.729 { 00:24:38.729 "name": "BaseBdev2", 00:24:38.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.729 "is_configured": false, 00:24:38.729 "data_offset": 0, 00:24:38.729 "data_size": 0 00:24:38.729 }, 00:24:38.729 { 00:24:38.729 "name": "BaseBdev3", 00:24:38.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.729 "is_configured": false, 00:24:38.729 "data_offset": 0, 00:24:38.729 "data_size": 0 00:24:38.729 }, 00:24:38.729 { 00:24:38.729 "name": "BaseBdev4", 00:24:38.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.729 "is_configured": false, 00:24:38.729 "data_offset": 0, 00:24:38.729 "data_size": 0 00:24:38.729 } 00:24:38.729 ] 00:24:38.729 }' 00:24:38.729 10:50:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:38.729 10:50:05 -- common/autotest_common.sh@10 -- # set +x 00:24:39.664 10:50:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:39.664 [2024-07-24 10:50:06.267434] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:39.664 [2024-07-24 10:50:06.267858] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000005780 name Existed_Raid, state configuring 00:24:39.664 10:50:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:39.664 10:50:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:39.922 10:50:06 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:40.181 BaseBdev1 00:24:40.181 10:50:06 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:40.181 10:50:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:40.181 10:50:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:40.181 10:50:06 -- common/autotest_common.sh@889 -- # local i 00:24:40.181 10:50:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:40.181 10:50:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:40.181 10:50:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:40.440 10:50:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:40.698 [ 00:24:40.698 { 00:24:40.698 "name": "BaseBdev1", 00:24:40.698 "aliases": [ 00:24:40.698 
"edace901-97c6-423b-8ada-148b281d6456" 00:24:40.698 ], 00:24:40.698 "product_name": "Malloc disk", 00:24:40.698 "block_size": 512, 00:24:40.698 "num_blocks": 65536, 00:24:40.698 "uuid": "edace901-97c6-423b-8ada-148b281d6456", 00:24:40.698 "assigned_rate_limits": { 00:24:40.698 "rw_ios_per_sec": 0, 00:24:40.698 "rw_mbytes_per_sec": 0, 00:24:40.698 "r_mbytes_per_sec": 0, 00:24:40.698 "w_mbytes_per_sec": 0 00:24:40.698 }, 00:24:40.698 "claimed": false, 00:24:40.698 "zoned": false, 00:24:40.698 "supported_io_types": { 00:24:40.698 "read": true, 00:24:40.698 "write": true, 00:24:40.698 "unmap": true, 00:24:40.698 "write_zeroes": true, 00:24:40.698 "flush": true, 00:24:40.698 "reset": true, 00:24:40.698 "compare": false, 00:24:40.698 "compare_and_write": false, 00:24:40.698 "abort": true, 00:24:40.698 "nvme_admin": false, 00:24:40.698 "nvme_io": false 00:24:40.698 }, 00:24:40.698 "memory_domains": [ 00:24:40.698 { 00:24:40.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.698 "dma_device_type": 2 00:24:40.698 } 00:24:40.698 ], 00:24:40.698 "driver_specific": {} 00:24:40.698 } 00:24:40.698 ] 00:24:40.698 10:50:07 -- common/autotest_common.sh@895 -- # return 0 00:24:40.698 10:50:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:40.958 [2024-07-24 10:50:07.586377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:40.958 [2024-07-24 10:50:07.589243] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:40.958 [2024-07-24 10:50:07.589568] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:40.958 [2024-07-24 10:50:07.589708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:40.958 [2024-07-24 10:50:07.589783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:40.958 [2024-07-24 10:50:07.589894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:40.958 [2024-07-24 10:50:07.589960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.958 10:50:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.218 10:50:07 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:24:41.218 "name": "Existed_Raid", 00:24:41.218 "uuid": "996dabab-741e-46e5-9661-a95e3d629502", 00:24:41.218 "strip_size_kb": 64, 00:24:41.218 "state": "configuring", 00:24:41.218 "raid_level": "raid5f", 00:24:41.218 "superblock": true, 00:24:41.218 "num_base_bdevs": 4, 00:24:41.218 "num_base_bdevs_discovered": 1, 00:24:41.218 "num_base_bdevs_operational": 4, 00:24:41.218 "base_bdevs_list": [ 00:24:41.218 { 00:24:41.218 "name": "BaseBdev1", 00:24:41.218 "uuid": "edace901-97c6-423b-8ada-148b281d6456", 00:24:41.218 "is_configured": true, 00:24:41.218 "data_offset": 2048, 00:24:41.218 "data_size": 63488 00:24:41.218 }, 00:24:41.218 { 00:24:41.218 "name": "BaseBdev2", 00:24:41.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.218 "is_configured": false, 00:24:41.218 "data_offset": 0, 00:24:41.218 "data_size": 0 00:24:41.218 }, 00:24:41.218 { 00:24:41.218 "name": "BaseBdev3", 00:24:41.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.218 "is_configured": false, 00:24:41.218 "data_offset": 0, 00:24:41.218 "data_size": 0 00:24:41.218 }, 00:24:41.218 { 00:24:41.218 "name": "BaseBdev4", 00:24:41.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.218 "is_configured": false, 00:24:41.218 "data_offset": 0, 00:24:41.218 "data_size": 0 00:24:41.218 } 00:24:41.218 ] 00:24:41.218 }' 00:24:41.218 10:50:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:41.218 10:50:07 -- common/autotest_common.sh@10 -- # set +x 00:24:42.156 10:50:08 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:42.156 [2024-07-24 10:50:08.768993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:42.156 BaseBdev2 00:24:42.156 10:50:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:42.156 10:50:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:42.156 10:50:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:42.156 10:50:08 -- common/autotest_common.sh@889 -- # local i 00:24:42.156 10:50:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:42.156 10:50:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:42.156 10:50:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.414 10:50:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:42.672 [ 00:24:42.672 { 00:24:42.672 "name": "BaseBdev2", 00:24:42.672 "aliases": [ 00:24:42.672 "2bce0f51-5648-468a-9ec4-e5c11e9964c4" 00:24:42.672 ], 00:24:42.672 "product_name": "Malloc disk", 00:24:42.672 "block_size": 512, 00:24:42.672 "num_blocks": 65536, 00:24:42.672 "uuid": "2bce0f51-5648-468a-9ec4-e5c11e9964c4", 00:24:42.672 "assigned_rate_limits": { 00:24:42.672 "rw_ios_per_sec": 0, 00:24:42.672 "rw_mbytes_per_sec": 0, 00:24:42.672 "r_mbytes_per_sec": 0, 00:24:42.672 "w_mbytes_per_sec": 0 00:24:42.672 }, 00:24:42.672 "claimed": true, 00:24:42.672 "claim_type": "exclusive_write", 00:24:42.672 "zoned": false, 00:24:42.672 "supported_io_types": { 00:24:42.672 "read": true, 00:24:42.672 "write": true, 00:24:42.672 "unmap": true, 00:24:42.672 "write_zeroes": true, 00:24:42.672 "flush": true, 00:24:42.672 "reset": true, 00:24:42.672 "compare": false, 00:24:42.672 "compare_and_write": false, 00:24:42.672 "abort": true, 00:24:42.672 "nvme_admin": false, 00:24:42.672 
"nvme_io": false 00:24:42.672 }, 00:24:42.672 "memory_domains": [ 00:24:42.672 { 00:24:42.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.672 "dma_device_type": 2 00:24:42.672 } 00:24:42.672 ], 00:24:42.672 "driver_specific": {} 00:24:42.672 } 00:24:42.672 ] 00:24:42.672 10:50:09 -- common/autotest_common.sh@895 -- # return 0 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.672 10:50:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.930 10:50:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:42.930 "name": "Existed_Raid", 00:24:42.930 "uuid": "996dabab-741e-46e5-9661-a95e3d629502", 00:24:42.931 "strip_size_kb": 64, 00:24:42.931 "state": "configuring", 00:24:42.931 "raid_level": "raid5f", 00:24:42.931 "superblock": true, 00:24:42.931 "num_base_bdevs": 4, 00:24:42.931 "num_base_bdevs_discovered": 2, 00:24:42.931 "num_base_bdevs_operational": 4, 00:24:42.931 "base_bdevs_list": [ 00:24:42.931 { 00:24:42.931 "name": "BaseBdev1", 00:24:42.931 "uuid": "edace901-97c6-423b-8ada-148b281d6456", 00:24:42.931 "is_configured": true, 00:24:42.931 "data_offset": 2048, 00:24:42.931 "data_size": 63488 00:24:42.931 }, 00:24:42.931 { 00:24:42.931 "name": "BaseBdev2", 00:24:42.931 "uuid": "2bce0f51-5648-468a-9ec4-e5c11e9964c4", 00:24:42.931 "is_configured": true, 00:24:42.931 "data_offset": 2048, 00:24:42.931 "data_size": 63488 00:24:42.931 }, 00:24:42.931 { 00:24:42.931 "name": "BaseBdev3", 00:24:42.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.931 "is_configured": false, 00:24:42.931 "data_offset": 0, 00:24:42.931 "data_size": 0 00:24:42.931 }, 00:24:42.931 { 00:24:42.931 "name": "BaseBdev4", 00:24:42.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:42.931 "is_configured": false, 00:24:42.931 "data_offset": 0, 00:24:42.931 "data_size": 0 00:24:42.931 } 00:24:42.931 ] 00:24:42.931 }' 00:24:42.931 10:50:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:42.931 10:50:09 -- common/autotest_common.sh@10 -- # set +x 00:24:43.864 10:50:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:43.865 [2024-07-24 10:50:10.537329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:43.865 BaseBdev3 00:24:44.122 10:50:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:44.122 10:50:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:44.122 10:50:10 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:44.122 10:50:10 -- common/autotest_common.sh@889 -- # local i 00:24:44.122 10:50:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:44.122 10:50:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:44.122 10:50:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:44.122 10:50:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:44.380 [ 00:24:44.380 { 00:24:44.380 "name": "BaseBdev3", 00:24:44.380 "aliases": [ 00:24:44.380 "622461a9-8ce9-47d2-b754-376ff6f281d9" 00:24:44.380 ], 00:24:44.380 "product_name": "Malloc disk", 00:24:44.380 "block_size": 512, 00:24:44.380 "num_blocks": 65536, 00:24:44.380 "uuid": "622461a9-8ce9-47d2-b754-376ff6f281d9", 00:24:44.380 "assigned_rate_limits": { 00:24:44.380 "rw_ios_per_sec": 0, 00:24:44.380 "rw_mbytes_per_sec": 0, 00:24:44.380 "r_mbytes_per_sec": 0, 00:24:44.380 "w_mbytes_per_sec": 0 00:24:44.380 }, 00:24:44.380 "claimed": true, 00:24:44.380 "claim_type": "exclusive_write", 00:24:44.380 "zoned": false, 00:24:44.380 "supported_io_types": { 00:24:44.380 "read": true, 00:24:44.380 "write": true, 00:24:44.380 "unmap": true, 00:24:44.380 "write_zeroes": true, 00:24:44.380 "flush": true, 00:24:44.380 "reset": true, 00:24:44.380 "compare": false, 00:24:44.380 "compare_and_write": false, 00:24:44.380 "abort": true, 00:24:44.380 "nvme_admin": false, 00:24:44.380 "nvme_io": false 00:24:44.380 }, 00:24:44.380 "memory_domains": [ 00:24:44.380 { 00:24:44.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.380 "dma_device_type": 2 00:24:44.380 } 00:24:44.380 ], 00:24:44.380 "driver_specific": {} 00:24:44.380 } 00:24:44.380 ] 00:24:44.380 10:50:11 -- common/autotest_common.sh@895 -- # return 0 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.380 10:50:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.638 10:50:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:44.638 "name": "Existed_Raid", 00:24:44.638 "uuid": "996dabab-741e-46e5-9661-a95e3d629502", 00:24:44.638 "strip_size_kb": 64, 00:24:44.638 "state": "configuring", 00:24:44.638 "raid_level": "raid5f", 00:24:44.638 "superblock": true, 00:24:44.638 "num_base_bdevs": 4, 00:24:44.638 "num_base_bdevs_discovered": 3, 00:24:44.638 "num_base_bdevs_operational": 4, 
00:24:44.638 "base_bdevs_list": [ 00:24:44.638 { 00:24:44.638 "name": "BaseBdev1", 00:24:44.638 "uuid": "edace901-97c6-423b-8ada-148b281d6456", 00:24:44.638 "is_configured": true, 00:24:44.638 "data_offset": 2048, 00:24:44.638 "data_size": 63488 00:24:44.638 }, 00:24:44.638 { 00:24:44.638 "name": "BaseBdev2", 00:24:44.638 "uuid": "2bce0f51-5648-468a-9ec4-e5c11e9964c4", 00:24:44.638 "is_configured": true, 00:24:44.638 "data_offset": 2048, 00:24:44.638 "data_size": 63488 00:24:44.638 }, 00:24:44.638 { 00:24:44.638 "name": "BaseBdev3", 00:24:44.638 "uuid": "622461a9-8ce9-47d2-b754-376ff6f281d9", 00:24:44.638 "is_configured": true, 00:24:44.638 "data_offset": 2048, 00:24:44.638 "data_size": 63488 00:24:44.638 }, 00:24:44.638 { 00:24:44.638 "name": "BaseBdev4", 00:24:44.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.638 "is_configured": false, 00:24:44.638 "data_offset": 0, 00:24:44.638 "data_size": 0 00:24:44.638 } 00:24:44.638 ] 00:24:44.638 }' 00:24:44.638 10:50:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:44.638 10:50:11 -- common/autotest_common.sh@10 -- # set +x 00:24:45.572 10:50:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:45.572 [2024-07-24 10:50:12.231135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:45.572 [2024-07-24 10:50:12.231928] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006680 00:24:45.572 [2024-07-24 10:50:12.232154] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:45.572 [2024-07-24 10:50:12.232515] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:24:45.572 BaseBdev4 00:24:45.572 [2024-07-24 10:50:12.233716] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006680 00:24:45.572 [2024-07-24 10:50:12.233927] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006680 00:24:45.572 [2024-07-24 10:50:12.234271] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.572 10:50:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:45.572 10:50:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:45.572 10:50:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:45.572 10:50:12 -- common/autotest_common.sh@889 -- # local i 00:24:45.572 10:50:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:45.572 10:50:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:45.572 10:50:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:46.167 10:50:12 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:46.167 [ 00:24:46.167 { 00:24:46.167 "name": "BaseBdev4", 00:24:46.167 "aliases": [ 00:24:46.167 "9aec0ffe-58d2-493c-bad4-70bdfc376440" 00:24:46.167 ], 00:24:46.167 "product_name": "Malloc disk", 00:24:46.167 "block_size": 512, 00:24:46.167 "num_blocks": 65536, 00:24:46.167 "uuid": "9aec0ffe-58d2-493c-bad4-70bdfc376440", 00:24:46.167 "assigned_rate_limits": { 00:24:46.167 "rw_ios_per_sec": 0, 00:24:46.167 "rw_mbytes_per_sec": 0, 00:24:46.167 "r_mbytes_per_sec": 0, 00:24:46.167 "w_mbytes_per_sec": 0 00:24:46.167 }, 00:24:46.167 "claimed": true, 00:24:46.167 "claim_type": 
"exclusive_write", 00:24:46.167 "zoned": false, 00:24:46.167 "supported_io_types": { 00:24:46.167 "read": true, 00:24:46.167 "write": true, 00:24:46.167 "unmap": true, 00:24:46.167 "write_zeroes": true, 00:24:46.167 "flush": true, 00:24:46.167 "reset": true, 00:24:46.167 "compare": false, 00:24:46.167 "compare_and_write": false, 00:24:46.167 "abort": true, 00:24:46.167 "nvme_admin": false, 00:24:46.167 "nvme_io": false 00:24:46.167 }, 00:24:46.167 "memory_domains": [ 00:24:46.167 { 00:24:46.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.167 "dma_device_type": 2 00:24:46.167 } 00:24:46.168 ], 00:24:46.168 "driver_specific": {} 00:24:46.168 } 00:24:46.168 ] 00:24:46.168 10:50:12 -- common/autotest_common.sh@895 -- # return 0 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:46.168 10:50:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.437 10:50:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:46.437 "name": "Existed_Raid", 00:24:46.437 "uuid": "996dabab-741e-46e5-9661-a95e3d629502", 00:24:46.437 "strip_size_kb": 64, 00:24:46.437 "state": "online", 00:24:46.437 "raid_level": "raid5f", 00:24:46.437 "superblock": true, 00:24:46.437 "num_base_bdevs": 4, 00:24:46.437 "num_base_bdevs_discovered": 4, 00:24:46.437 "num_base_bdevs_operational": 4, 00:24:46.437 "base_bdevs_list": [ 00:24:46.437 { 00:24:46.437 "name": "BaseBdev1", 00:24:46.437 "uuid": "edace901-97c6-423b-8ada-148b281d6456", 00:24:46.437 "is_configured": true, 00:24:46.437 "data_offset": 2048, 00:24:46.437 "data_size": 63488 00:24:46.437 }, 00:24:46.437 { 00:24:46.437 "name": "BaseBdev2", 00:24:46.437 "uuid": "2bce0f51-5648-468a-9ec4-e5c11e9964c4", 00:24:46.437 "is_configured": true, 00:24:46.437 "data_offset": 2048, 00:24:46.437 "data_size": 63488 00:24:46.437 }, 00:24:46.437 { 00:24:46.437 "name": "BaseBdev3", 00:24:46.437 "uuid": "622461a9-8ce9-47d2-b754-376ff6f281d9", 00:24:46.437 "is_configured": true, 00:24:46.437 "data_offset": 2048, 00:24:46.437 "data_size": 63488 00:24:46.437 }, 00:24:46.437 { 00:24:46.437 "name": "BaseBdev4", 00:24:46.437 "uuid": "9aec0ffe-58d2-493c-bad4-70bdfc376440", 00:24:46.437 "is_configured": true, 00:24:46.437 "data_offset": 2048, 00:24:46.437 "data_size": 63488 00:24:46.437 } 00:24:46.437 ] 00:24:46.437 }' 00:24:46.437 10:50:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:46.437 10:50:13 -- common/autotest_common.sh@10 -- # set +x 00:24:47.003 10:50:13 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:47.261 [2024-07-24 10:50:13.933365] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.519 10:50:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.777 10:50:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.777 "name": "Existed_Raid", 00:24:47.777 "uuid": "996dabab-741e-46e5-9661-a95e3d629502", 00:24:47.777 "strip_size_kb": 64, 00:24:47.777 "state": "online", 00:24:47.777 "raid_level": "raid5f", 00:24:47.777 "superblock": true, 00:24:47.777 "num_base_bdevs": 4, 00:24:47.777 "num_base_bdevs_discovered": 3, 00:24:47.777 "num_base_bdevs_operational": 3, 00:24:47.777 "base_bdevs_list": [ 00:24:47.777 { 00:24:47.777 "name": null, 00:24:47.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.777 "is_configured": false, 00:24:47.777 "data_offset": 2048, 00:24:47.777 "data_size": 63488 00:24:47.777 }, 00:24:47.777 { 00:24:47.777 "name": "BaseBdev2", 00:24:47.777 "uuid": "2bce0f51-5648-468a-9ec4-e5c11e9964c4", 00:24:47.777 "is_configured": true, 00:24:47.777 "data_offset": 2048, 00:24:47.777 "data_size": 63488 00:24:47.777 }, 00:24:47.777 { 00:24:47.777 "name": "BaseBdev3", 00:24:47.777 "uuid": "622461a9-8ce9-47d2-b754-376ff6f281d9", 00:24:47.777 "is_configured": true, 00:24:47.777 "data_offset": 2048, 00:24:47.777 "data_size": 63488 00:24:47.777 }, 00:24:47.777 { 00:24:47.777 "name": "BaseBdev4", 00:24:47.777 "uuid": "9aec0ffe-58d2-493c-bad4-70bdfc376440", 00:24:47.777 "is_configured": true, 00:24:47.777 "data_offset": 2048, 00:24:47.777 "data_size": 63488 00:24:47.777 } 00:24:47.777 ] 00:24:47.777 }' 00:24:47.777 10:50:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.777 10:50:14 -- common/autotest_common.sh@10 -- # set +x 00:24:48.344 10:50:14 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:48.344 10:50:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:48.344 10:50:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.344 10:50:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:48.602 10:50:15 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:24:48.602 10:50:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:48.602 10:50:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:48.860 [2024-07-24 10:50:15.376223] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:48.860 [2024-07-24 10:50:15.376614] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:48.860 [2024-07-24 10:50:15.376849] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:48.860 10:50:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:48.860 10:50:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:48.860 10:50:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.860 10:50:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:49.118 10:50:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:49.118 10:50:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.118 10:50:15 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:49.375 [2024-07-24 10:50:15.960271] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:49.375 10:50:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:49.375 10:50:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:49.375 10:50:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.375 10:50:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:49.633 10:50:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:49.633 10:50:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:49.633 10:50:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:49.892 [2024-07-24 10:50:16.463714] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:49.892 [2024-07-24 10:50:16.464134] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state offline 00:24:49.892 10:50:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:49.892 10:50:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:49.892 10:50:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.892 10:50:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:50.178 10:50:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:50.178 10:50:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:50.178 10:50:16 -- bdev/bdev_raid.sh@287 -- # killprocess 141017 00:24:50.178 10:50:16 -- common/autotest_common.sh@926 -- # '[' -z 141017 ']' 00:24:50.178 10:50:16 -- common/autotest_common.sh@930 -- # kill -0 141017 00:24:50.178 10:50:16 -- common/autotest_common.sh@931 -- # uname 00:24:50.178 10:50:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:50.178 10:50:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141017 00:24:50.178 10:50:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:50.178 10:50:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:50.178 10:50:16 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 141017' 00:24:50.178 killing process with pid 141017 00:24:50.178 10:50:16 -- common/autotest_common.sh@945 -- # kill 141017 00:24:50.178 10:50:16 -- common/autotest_common.sh@950 -- # wait 141017 00:24:50.178 [2024-07-24 10:50:16.813365] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:50.178 [2024-07-24 10:50:16.813498] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:50.744 00:24:50.744 real 0m15.697s 00:24:50.744 user 0m28.642s 00:24:50.744 sys 0m2.180s 00:24:50.744 10:50:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:50.744 10:50:17 -- common/autotest_common.sh@10 -- # set +x 00:24:50.744 ************************************ 00:24:50.744 END TEST raid5f_state_function_test_sb 00:24:50.744 ************************************ 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:24:50.744 10:50:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:24:50.744 10:50:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:50.744 10:50:17 -- common/autotest_common.sh@10 -- # set +x 00:24:50.744 ************************************ 00:24:50.744 START TEST raid5f_superblock_test 00:24:50.744 ************************************ 00:24:50.744 10:50:17 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@357 -- # raid_pid=141473 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:50.744 10:50:17 -- bdev/bdev_raid.sh@358 -- # waitforlisten 141473 /var/tmp/spdk-raid.sock 00:24:50.744 10:50:17 -- common/autotest_common.sh@819 -- # '[' -z 141473 ']' 00:24:50.744 10:50:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:50.744 10:50:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:50.744 10:50:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:50.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
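Each TEST case above spins up its own copy of the bdev_svc app on a dedicated RPC socket and only starts issuing bdev_raid RPCs once that socket answers, then kills the app again at the end. A rough standalone equivalent of that lifecycle, using the paths from the trace, is sketched below; the polling probe (rpc_get_methods) is only an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation.

    # Sketch of the per-test app lifecycle seen above (paths taken from the trace).
    SOCK=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
    raid_pid=$!

    # Wait until the app responds on the socket before sending any test RPCs
    # (stand-in for waitforlisten; the probe RPC is used here only as a liveness check).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

    # ... test body: bdev_malloc_create / bdev_raid_create / ... against $SOCK ...

    kill "$raid_pid"    # killprocess in the trace additionally sanity-checks the pid and process name first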
00:24:50.744 10:50:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:50.744 10:50:17 -- common/autotest_common.sh@10 -- # set +x 00:24:50.744 [2024-07-24 10:50:17.282994] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:50.744 [2024-07-24 10:50:17.283441] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141473 ] 00:24:50.744 [2024-07-24 10:50:17.428320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.002 [2024-07-24 10:50:17.554881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.002 [2024-07-24 10:50:17.632532] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:51.937 10:50:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:51.937 10:50:18 -- common/autotest_common.sh@852 -- # return 0 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:51.937 malloc1 00:24:51.937 10:50:18 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:52.196 [2024-07-24 10:50:18.821423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:52.197 [2024-07-24 10:50:18.821872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.197 [2024-07-24 10:50:18.822072] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:24:52.197 [2024-07-24 10:50:18.822261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.197 [2024-07-24 10:50:18.825403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.197 [2024-07-24 10:50:18.825611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:52.197 pt1 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:52.197 10:50:18 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:52.455 malloc2 00:24:52.455 10:50:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:52.714 [2024-07-24 10:50:19.293184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:52.714 [2024-07-24 10:50:19.293591] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.714 [2024-07-24 10:50:19.293754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:24:52.714 [2024-07-24 10:50:19.293905] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.714 [2024-07-24 10:50:19.296656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.714 [2024-07-24 10:50:19.296841] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:52.714 pt2 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:52.714 10:50:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:52.972 malloc3 00:24:52.972 10:50:19 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:53.231 [2024-07-24 10:50:19.790487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:53.231 [2024-07-24 10:50:19.790871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.231 [2024-07-24 10:50:19.791049] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:53.231 [2024-07-24 10:50:19.791204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.231 [2024-07-24 10:50:19.794096] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.231 [2024-07-24 10:50:19.794299] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:53.231 pt3 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:53.231 10:50:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:53.489 malloc4 00:24:53.489 10:50:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:53.747 [2024-07-24 10:50:20.309955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:53.747 [2024-07-24 10:50:20.310378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.747 [2024-07-24 10:50:20.310566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:53.747 [2024-07-24 10:50:20.310731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.747 [2024-07-24 10:50:20.313769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.747 [2024-07-24 10:50:20.313972] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:53.747 pt4 00:24:53.747 10:50:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:53.747 10:50:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:53.747 10:50:20 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:54.005 [2024-07-24 10:50:20.546594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:54.005 [2024-07-24 10:50:20.549332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:54.005 [2024-07-24 10:50:20.549575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:54.005 [2024-07-24 10:50:20.549750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:54.005 [2024-07-24 10:50:20.550190] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:24:54.005 [2024-07-24 10:50:20.550332] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:54.005 [2024-07-24 10:50:20.550627] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:24:54.005 [2024-07-24 10:50:20.551734] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:24:54.005 [2024-07-24 10:50:20.551874] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:24:54.005 [2024-07-24 10:50:20.552182] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:24:54.005 10:50:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:54.263 10:50:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:54.263 "name": "raid_bdev1", 00:24:54.263 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:24:54.263 "strip_size_kb": 64, 00:24:54.263 "state": "online", 00:24:54.263 "raid_level": "raid5f", 00:24:54.263 "superblock": true, 00:24:54.263 "num_base_bdevs": 4, 00:24:54.263 "num_base_bdevs_discovered": 4, 00:24:54.263 "num_base_bdevs_operational": 4, 00:24:54.263 "base_bdevs_list": [ 00:24:54.263 { 00:24:54.263 "name": "pt1", 00:24:54.263 "uuid": "0903c4dd-bfe0-5bac-9344-1cc74685641c", 00:24:54.263 "is_configured": true, 00:24:54.263 "data_offset": 2048, 00:24:54.263 "data_size": 63488 00:24:54.263 }, 00:24:54.263 { 00:24:54.263 "name": "pt2", 00:24:54.263 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:24:54.263 "is_configured": true, 00:24:54.263 "data_offset": 2048, 00:24:54.263 "data_size": 63488 00:24:54.263 }, 00:24:54.263 { 00:24:54.263 "name": "pt3", 00:24:54.263 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:24:54.263 "is_configured": true, 00:24:54.263 "data_offset": 2048, 00:24:54.263 "data_size": 63488 00:24:54.263 }, 00:24:54.263 { 00:24:54.263 "name": "pt4", 00:24:54.263 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:24:54.263 "is_configured": true, 00:24:54.263 "data_offset": 2048, 00:24:54.263 "data_size": 63488 00:24:54.263 } 00:24:54.263 ] 00:24:54.263 }' 00:24:54.263 10:50:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.263 10:50:20 -- common/autotest_common.sh@10 -- # set +x 00:24:54.829 10:50:21 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:54.829 10:50:21 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:55.088 [2024-07-24 10:50:21.752614] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:55.088 10:50:21 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=94e76df3-722a-4c49-975b-ca396fc12db8 00:24:55.088 10:50:21 -- bdev/bdev_raid.sh@380 -- # '[' -z 94e76df3-722a-4c49-975b-ca396fc12db8 ']' 00:24:55.088 10:50:21 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:55.346 [2024-07-24 10:50:22.024611] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:55.346 [2024-07-24 10:50:22.024958] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:55.346 [2024-07-24 10:50:22.025271] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:55.346 [2024-07-24 10:50:22.025539] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:55.346 [2024-07-24 10:50:22.025673] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:24:55.604 10:50:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:55.604 10:50:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.862 10:50:22 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:55.862 10:50:22 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:55.862 10:50:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:55.862 10:50:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
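By this point the superblock test has built its base bdevs as passthru devices layered on malloc bdevs, assembled raid_bdev1 over them, and is now tearing the passthru layer back down. The setup half of that flow, reduced to the bare RPC calls visible in the trace, is roughly the following; sizes, names and UUIDs are copied from the trace, while the remark about why the UUIDs are fixed is an interpretation rather than something the log states.

    # Sketch of the base-bdev stack used by raid5f_superblock_test (values from the trace).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # A 32 MiB malloc bdev with 512-byte blocks, wrapped by a passthru bdev that is
    # given a fixed UUID (presumably so the raid5f superblock can re-identify it later).
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ... repeated for malloc2/pt2, malloc3/pt3 and malloc4/pt4 ...

    # Assemble the raid5f volume with an on-disk superblock (-s) over the passthru bdevs.
    $RPC bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'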
00:24:55.862 10:50:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:55.862 10:50:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:56.120 10:50:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:56.120 10:50:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:56.384 10:50:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:56.384 10:50:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:56.643 10:50:23 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:56.643 10:50:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:56.902 10:50:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:56.902 10:50:23 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:56.902 10:50:23 -- common/autotest_common.sh@640 -- # local es=0 00:24:56.902 10:50:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:56.902 10:50:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.902 10:50:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:56.902 10:50:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.902 10:50:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:56.902 10:50:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.902 10:50:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:56.902 10:50:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:56.902 10:50:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:56.902 10:50:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:57.160 [2024-07-24 10:50:23.732990] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:57.160 [2024-07-24 10:50:23.735492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:57.160 [2024-07-24 10:50:23.735732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:57.160 [2024-07-24 10:50:23.735915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:57.160 [2024-07-24 10:50:23.736095] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:57.160 [2024-07-24 10:50:23.736316] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:57.160 [2024-07-24 10:50:23.736473] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:57.160 
[2024-07-24 10:50:23.736647] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:24:57.160 [2024-07-24 10:50:23.736808] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:57.160 [2024-07-24 10:50:23.736931] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state configuring 00:24:57.160 request: 00:24:57.160 { 00:24:57.160 "name": "raid_bdev1", 00:24:57.160 "raid_level": "raid5f", 00:24:57.160 "base_bdevs": [ 00:24:57.160 "malloc1", 00:24:57.160 "malloc2", 00:24:57.160 "malloc3", 00:24:57.160 "malloc4" 00:24:57.160 ], 00:24:57.160 "superblock": false, 00:24:57.160 "strip_size_kb": 64, 00:24:57.160 "method": "bdev_raid_create", 00:24:57.160 "req_id": 1 00:24:57.160 } 00:24:57.160 Got JSON-RPC error response 00:24:57.160 response: 00:24:57.160 { 00:24:57.160 "code": -17, 00:24:57.160 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:57.160 } 00:24:57.160 10:50:23 -- common/autotest_common.sh@643 -- # es=1 00:24:57.160 10:50:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:57.160 10:50:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:57.160 10:50:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:57.160 10:50:23 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.160 10:50:23 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:57.418 10:50:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:57.418 10:50:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:57.418 10:50:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:57.678 [2024-07-24 10:50:24.297527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:57.678 [2024-07-24 10:50:24.297866] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:57.678 [2024-07-24 10:50:24.298039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:57.678 [2024-07-24 10:50:24.298232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:57.678 [2024-07-24 10:50:24.301331] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:57.678 [2024-07-24 10:50:24.301561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:57.678 [2024-07-24 10:50:24.301833] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:57.678 [2024-07-24 10:50:24.302033] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:57.678 pt1 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:57.678 10:50:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.936 10:50:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.936 "name": "raid_bdev1", 00:24:57.936 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:24:57.936 "strip_size_kb": 64, 00:24:57.936 "state": "configuring", 00:24:57.936 "raid_level": "raid5f", 00:24:57.936 "superblock": true, 00:24:57.936 "num_base_bdevs": 4, 00:24:57.936 "num_base_bdevs_discovered": 1, 00:24:57.936 "num_base_bdevs_operational": 4, 00:24:57.936 "base_bdevs_list": [ 00:24:57.936 { 00:24:57.936 "name": "pt1", 00:24:57.936 "uuid": "0903c4dd-bfe0-5bac-9344-1cc74685641c", 00:24:57.936 "is_configured": true, 00:24:57.936 "data_offset": 2048, 00:24:57.936 "data_size": 63488 00:24:57.936 }, 00:24:57.936 { 00:24:57.936 "name": null, 00:24:57.936 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:24:57.936 "is_configured": false, 00:24:57.936 "data_offset": 2048, 00:24:57.936 "data_size": 63488 00:24:57.936 }, 00:24:57.936 { 00:24:57.936 "name": null, 00:24:57.936 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:24:57.936 "is_configured": false, 00:24:57.936 "data_offset": 2048, 00:24:57.936 "data_size": 63488 00:24:57.936 }, 00:24:57.936 { 00:24:57.936 "name": null, 00:24:57.936 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:24:57.936 "is_configured": false, 00:24:57.936 "data_offset": 2048, 00:24:57.936 "data_size": 63488 00:24:57.936 } 00:24:57.936 ] 00:24:57.936 }' 00:24:57.936 10:50:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.936 10:50:24 -- common/autotest_common.sh@10 -- # set +x 00:24:58.872 10:50:25 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:24:58.872 10:50:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:58.872 [2024-07-24 10:50:25.546346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:58.872 [2024-07-24 10:50:25.546715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.872 [2024-07-24 10:50:25.546808] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:58.872 [2024-07-24 10:50:25.547137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.872 [2024-07-24 10:50:25.547790] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.872 [2024-07-24 10:50:25.547983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:58.872 [2024-07-24 10:50:25.548263] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:58.872 [2024-07-24 10:50:25.548410] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:58.872 pt2 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:59.130 [2024-07-24 10:50:25.790466] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
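The state checks interleaved with the trace (verify_raid_bdev_state) all follow the same pattern: fetch the descriptor for raid_bdev1 with bdev_raid_get_bdevs and pick out the fields of interest with jq. A rough, simplified stand-in — not the helper's actual code, and again using an assumed $RPC shorthand — would look like this at the point where only pt1 is attached:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

    # with only pt1 registered the volume must sit in "configuring",
    # with 1 of the 4 expected base bdevs discovered
    [ "$(jq -r '.state' <<< "$info")" = configuring ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 1 ]
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 4 ]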
00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.130 10:50:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.697 10:50:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.697 "name": "raid_bdev1", 00:24:59.697 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:24:59.697 "strip_size_kb": 64, 00:24:59.697 "state": "configuring", 00:24:59.697 "raid_level": "raid5f", 00:24:59.697 "superblock": true, 00:24:59.697 "num_base_bdevs": 4, 00:24:59.697 "num_base_bdevs_discovered": 1, 00:24:59.697 "num_base_bdevs_operational": 4, 00:24:59.697 "base_bdevs_list": [ 00:24:59.697 { 00:24:59.697 "name": "pt1", 00:24:59.697 "uuid": "0903c4dd-bfe0-5bac-9344-1cc74685641c", 00:24:59.697 "is_configured": true, 00:24:59.697 "data_offset": 2048, 00:24:59.697 "data_size": 63488 00:24:59.697 }, 00:24:59.697 { 00:24:59.697 "name": null, 00:24:59.697 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:24:59.697 "is_configured": false, 00:24:59.697 "data_offset": 2048, 00:24:59.697 "data_size": 63488 00:24:59.697 }, 00:24:59.697 { 00:24:59.697 "name": null, 00:24:59.697 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:24:59.697 "is_configured": false, 00:24:59.697 "data_offset": 2048, 00:24:59.697 "data_size": 63488 00:24:59.697 }, 00:24:59.697 { 00:24:59.697 "name": null, 00:24:59.697 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:24:59.697 "is_configured": false, 00:24:59.697 "data_offset": 2048, 00:24:59.697 "data_size": 63488 00:24:59.697 } 00:24:59.697 ] 00:24:59.697 }' 00:24:59.697 10:50:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.697 10:50:26 -- common/autotest_common.sh@10 -- # set +x 00:25:00.264 10:50:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:00.264 10:50:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:00.264 10:50:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:00.523 [2024-07-24 10:50:27.046816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:00.523 [2024-07-24 10:50:27.047231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.523 [2024-07-24 10:50:27.047426] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:00.523 [2024-07-24 10:50:27.047607] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.523 [2024-07-24 10:50:27.048210] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.523 [2024-07-24 10:50:27.048396] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:00.523 [2024-07-24 10:50:27.048638] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:25:00.523 [2024-07-24 10:50:27.048786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:00.523 pt2 00:25:00.523 10:50:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:00.523 10:50:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:00.523 10:50:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:00.782 [2024-07-24 10:50:27.370923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:00.782 [2024-07-24 10:50:27.371363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.782 [2024-07-24 10:50:27.371478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:00.782 [2024-07-24 10:50:27.371792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.782 [2024-07-24 10:50:27.372402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.782 [2024-07-24 10:50:27.372598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:00.782 [2024-07-24 10:50:27.372829] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:00.782 [2024-07-24 10:50:27.372975] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:00.782 pt3 00:25:00.782 10:50:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:00.782 10:50:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:00.782 10:50:27 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:01.040 [2024-07-24 10:50:27.654994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:01.040 [2024-07-24 10:50:27.655452] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.040 [2024-07-24 10:50:27.655652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:01.040 [2024-07-24 10:50:27.655834] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.040 [2024-07-24 10:50:27.656535] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.040 [2024-07-24 10:50:27.656736] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:01.040 [2024-07-24 10:50:27.656966] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:01.040 [2024-07-24 10:50:27.657109] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:01.040 [2024-07-24 10:50:27.657429] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:01.040 [2024-07-24 10:50:27.657556] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:01.040 [2024-07-24 10:50:27.657687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:25:01.040 [2024-07-24 10:50:27.658553] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:01.040 [2024-07-24 10:50:27.658690] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:01.040 [2024-07-24 10:50:27.658992] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:25:01.040 pt4 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.040 10:50:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.310 10:50:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:01.310 "name": "raid_bdev1", 00:25:01.310 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:01.310 "strip_size_kb": 64, 00:25:01.310 "state": "online", 00:25:01.310 "raid_level": "raid5f", 00:25:01.311 "superblock": true, 00:25:01.311 "num_base_bdevs": 4, 00:25:01.311 "num_base_bdevs_discovered": 4, 00:25:01.311 "num_base_bdevs_operational": 4, 00:25:01.311 "base_bdevs_list": [ 00:25:01.311 { 00:25:01.311 "name": "pt1", 00:25:01.311 "uuid": "0903c4dd-bfe0-5bac-9344-1cc74685641c", 00:25:01.311 "is_configured": true, 00:25:01.311 "data_offset": 2048, 00:25:01.311 "data_size": 63488 00:25:01.311 }, 00:25:01.311 { 00:25:01.311 "name": "pt2", 00:25:01.311 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:01.311 "is_configured": true, 00:25:01.311 "data_offset": 2048, 00:25:01.311 "data_size": 63488 00:25:01.311 }, 00:25:01.311 { 00:25:01.311 "name": "pt3", 00:25:01.311 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:01.311 "is_configured": true, 00:25:01.311 "data_offset": 2048, 00:25:01.311 "data_size": 63488 00:25:01.311 }, 00:25:01.311 { 00:25:01.311 "name": "pt4", 00:25:01.311 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:01.311 "is_configured": true, 00:25:01.311 "data_offset": 2048, 00:25:01.311 "data_size": 63488 00:25:01.311 } 00:25:01.311 ] 00:25:01.311 }' 00:25:01.311 10:50:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:01.311 10:50:27 -- common/autotest_common.sh@10 -- # set +x 00:25:02.261 10:50:28 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:02.261 10:50:28 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:02.519 [2024-07-24 10:50:28.951615] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.519 10:50:28 -- bdev/bdev_raid.sh@430 -- # '[' 94e76df3-722a-4c49-975b-ca396fc12db8 '!=' 94e76df3-722a-4c49-975b-ca396fc12db8 ']' 00:25:02.519 10:50:28 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:25:02.519 10:50:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:02.519 10:50:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:25:02.519 10:50:28 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:25:02.778 [2024-07-24 10:50:29.263602] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:02.778 10:50:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.036 10:50:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.036 "name": "raid_bdev1", 00:25:03.036 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:03.036 "strip_size_kb": 64, 00:25:03.036 "state": "online", 00:25:03.036 "raid_level": "raid5f", 00:25:03.036 "superblock": true, 00:25:03.036 "num_base_bdevs": 4, 00:25:03.036 "num_base_bdevs_discovered": 3, 00:25:03.036 "num_base_bdevs_operational": 3, 00:25:03.036 "base_bdevs_list": [ 00:25:03.036 { 00:25:03.036 "name": null, 00:25:03.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.036 "is_configured": false, 00:25:03.036 "data_offset": 2048, 00:25:03.036 "data_size": 63488 00:25:03.036 }, 00:25:03.036 { 00:25:03.036 "name": "pt2", 00:25:03.036 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:03.036 "is_configured": true, 00:25:03.036 "data_offset": 2048, 00:25:03.036 "data_size": 63488 00:25:03.036 }, 00:25:03.036 { 00:25:03.036 "name": "pt3", 00:25:03.036 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:03.036 "is_configured": true, 00:25:03.036 "data_offset": 2048, 00:25:03.036 "data_size": 63488 00:25:03.036 }, 00:25:03.036 { 00:25:03.036 "name": "pt4", 00:25:03.036 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:03.036 "is_configured": true, 00:25:03.036 "data_offset": 2048, 00:25:03.036 "data_size": 63488 00:25:03.036 } 00:25:03.036 ] 00:25:03.036 }' 00:25:03.036 10:50:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.036 10:50:29 -- common/autotest_common.sh@10 -- # set +x 00:25:03.603 10:50:30 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:03.861 [2024-07-24 10:50:30.499879] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:03.861 [2024-07-24 10:50:30.500208] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:03.861 [2024-07-24 10:50:30.500427] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:03.861 [2024-07-24 10:50:30.500628] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:03.861 [2024-07-24 10:50:30.500742] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:03.861 10:50:30 -- bdev/bdev_raid.sh@443 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.861 10:50:30 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:25:04.428 10:50:30 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:25:04.428 10:50:30 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:25:04.428 10:50:30 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:25:04.428 10:50:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:04.428 10:50:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:04.428 10:50:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:04.428 10:50:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:04.428 10:50:31 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:04.687 10:50:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:04.687 10:50:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:04.687 10:50:31 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:04.945 10:50:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:04.945 10:50:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:04.945 10:50:31 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:04.945 10:50:31 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:04.945 10:50:31 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:05.204 [2024-07-24 10:50:31.724071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:05.204 [2024-07-24 10:50:31.724454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.204 [2024-07-24 10:50:31.724628] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:05.204 [2024-07-24 10:50:31.724804] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.204 [2024-07-24 10:50:31.727494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.204 [2024-07-24 10:50:31.727725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:05.204 [2024-07-24 10:50:31.727969] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:05.204 [2024-07-24 10:50:31.728128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:05.204 pt2 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.204 10:50:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.462 10:50:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.462 "name": "raid_bdev1", 00:25:05.462 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:05.462 "strip_size_kb": 64, 00:25:05.462 "state": "configuring", 00:25:05.462 "raid_level": "raid5f", 00:25:05.462 "superblock": true, 00:25:05.462 "num_base_bdevs": 4, 00:25:05.462 "num_base_bdevs_discovered": 1, 00:25:05.462 "num_base_bdevs_operational": 3, 00:25:05.462 "base_bdevs_list": [ 00:25:05.462 { 00:25:05.462 "name": null, 00:25:05.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.462 "is_configured": false, 00:25:05.462 "data_offset": 2048, 00:25:05.462 "data_size": 63488 00:25:05.462 }, 00:25:05.462 { 00:25:05.462 "name": "pt2", 00:25:05.462 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:05.462 "is_configured": true, 00:25:05.462 "data_offset": 2048, 00:25:05.462 "data_size": 63488 00:25:05.462 }, 00:25:05.462 { 00:25:05.462 "name": null, 00:25:05.462 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:05.462 "is_configured": false, 00:25:05.462 "data_offset": 2048, 00:25:05.462 "data_size": 63488 00:25:05.462 }, 00:25:05.462 { 00:25:05.462 "name": null, 00:25:05.462 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:05.462 "is_configured": false, 00:25:05.462 "data_offset": 2048, 00:25:05.462 "data_size": 63488 00:25:05.462 } 00:25:05.462 ] 00:25:05.462 }' 00:25:05.462 10:50:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:05.462 10:50:32 -- common/autotest_common.sh@10 -- # set +x 00:25:06.028 10:50:32 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:06.028 10:50:32 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:06.028 10:50:32 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:06.310 [2024-07-24 10:50:32.972422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:06.310 [2024-07-24 10:50:32.972818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.310 [2024-07-24 10:50:32.973037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:06.310 [2024-07-24 10:50:32.973187] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.310 [2024-07-24 10:50:32.973722] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.310 [2024-07-24 10:50:32.973899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:06.310 [2024-07-24 10:50:32.974120] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:06.310 [2024-07-24 10:50:32.974254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:06.310 pt3 00:25:06.310 10:50:32 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:06.310 10:50:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:06.310 10:50:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:06.310 10:50:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:06.310 10:50:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.310 10:50:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:06.310 10:50:32 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.310 10:50:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.568 10:50:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.568 10:50:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:06.568 10:50:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.568 10:50:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.836 10:50:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:06.837 "name": "raid_bdev1", 00:25:06.837 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:06.837 "strip_size_kb": 64, 00:25:06.837 "state": "configuring", 00:25:06.837 "raid_level": "raid5f", 00:25:06.837 "superblock": true, 00:25:06.837 "num_base_bdevs": 4, 00:25:06.837 "num_base_bdevs_discovered": 2, 00:25:06.837 "num_base_bdevs_operational": 3, 00:25:06.837 "base_bdevs_list": [ 00:25:06.837 { 00:25:06.837 "name": null, 00:25:06.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.837 "is_configured": false, 00:25:06.837 "data_offset": 2048, 00:25:06.837 "data_size": 63488 00:25:06.837 }, 00:25:06.837 { 00:25:06.837 "name": "pt2", 00:25:06.837 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:06.837 "is_configured": true, 00:25:06.837 "data_offset": 2048, 00:25:06.837 "data_size": 63488 00:25:06.837 }, 00:25:06.837 { 00:25:06.837 "name": "pt3", 00:25:06.837 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:06.837 "is_configured": true, 00:25:06.837 "data_offset": 2048, 00:25:06.837 "data_size": 63488 00:25:06.837 }, 00:25:06.837 { 00:25:06.837 "name": null, 00:25:06.837 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:06.837 "is_configured": false, 00:25:06.837 "data_offset": 2048, 00:25:06.837 "data_size": 63488 00:25:06.837 } 00:25:06.837 ] 00:25:06.837 }' 00:25:06.837 10:50:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.837 10:50:33 -- common/autotest_common.sh@10 -- # set +x 00:25:07.403 10:50:33 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:07.403 10:50:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:07.403 10:50:33 -- bdev/bdev_raid.sh@462 -- # i=3 00:25:07.403 10:50:33 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:07.661 [2024-07-24 10:50:34.248694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:07.661 [2024-07-24 10:50:34.249085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.662 [2024-07-24 10:50:34.249301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:07.662 [2024-07-24 10:50:34.249462] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.662 [2024-07-24 10:50:34.250090] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:07.662 [2024-07-24 10:50:34.250265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:07.662 [2024-07-24 10:50:34.250486] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:07.662 [2024-07-24 10:50:34.250632] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:07.662 [2024-07-24 10:50:34.250919] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:25:07.662 
[2024-07-24 10:50:34.251050] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:07.662 [2024-07-24 10:50:34.251243] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002c80 00:25:07.662 [2024-07-24 10:50:34.252277] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:25:07.662 [2024-07-24 10:50:34.252417] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:25:07.662 [2024-07-24 10:50:34.252810] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:07.662 pt4 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.662 10:50:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:07.920 10:50:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:07.920 "name": "raid_bdev1", 00:25:07.920 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:07.920 "strip_size_kb": 64, 00:25:07.920 "state": "online", 00:25:07.920 "raid_level": "raid5f", 00:25:07.920 "superblock": true, 00:25:07.920 "num_base_bdevs": 4, 00:25:07.920 "num_base_bdevs_discovered": 3, 00:25:07.920 "num_base_bdevs_operational": 3, 00:25:07.920 "base_bdevs_list": [ 00:25:07.920 { 00:25:07.920 "name": null, 00:25:07.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:07.920 "is_configured": false, 00:25:07.920 "data_offset": 2048, 00:25:07.920 "data_size": 63488 00:25:07.920 }, 00:25:07.920 { 00:25:07.920 "name": "pt2", 00:25:07.920 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:07.920 "is_configured": true, 00:25:07.920 "data_offset": 2048, 00:25:07.920 "data_size": 63488 00:25:07.920 }, 00:25:07.920 { 00:25:07.920 "name": "pt3", 00:25:07.920 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:07.920 "is_configured": true, 00:25:07.920 "data_offset": 2048, 00:25:07.920 "data_size": 63488 00:25:07.920 }, 00:25:07.920 { 00:25:07.920 "name": "pt4", 00:25:07.920 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:07.920 "is_configured": true, 00:25:07.920 "data_offset": 2048, 00:25:07.920 "data_size": 63488 00:25:07.920 } 00:25:07.920 ] 00:25:07.920 }' 00:25:07.920 10:50:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:07.920 10:50:34 -- common/autotest_common.sh@10 -- # set +x 00:25:08.855 10:50:35 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:25:08.855 10:50:35 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:08.855 [2024-07-24 10:50:35.509022] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:08.855 
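Tear-down throughout this section always runs top-down: the raid bdev is deleted first (the trace shows it flipping from online to offline and freeing its base bdevs in destruct), and only then are the remaining passthru bdevs removed one by one. Schematically, and only as a sketch with the same assumed $RPC shorthand as above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # drop the raid bdev first so it releases its claim on the base bdevs ...
    $RPC bdev_raid_delete raid_bdev1

    # ... then remove the passthru bdevs that are still registered
    for pt in pt2 pt3 pt4; do
        $RPC bdev_passthru_delete "$pt"
    done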
[2024-07-24 10:50:35.509356] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:08.855 [2024-07-24 10:50:35.509578] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:08.855 [2024-07-24 10:50:35.509789] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:08.855 [2024-07-24 10:50:35.509918] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:25:08.855 10:50:35 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.855 10:50:35 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:25:09.420 10:50:35 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:25:09.420 10:50:35 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:25:09.420 10:50:35 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:09.678 [2024-07-24 10:50:36.141155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:09.678 [2024-07-24 10:50:36.141592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.678 [2024-07-24 10:50:36.141692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:09.678 [2024-07-24 10:50:36.142023] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.678 [2024-07-24 10:50:36.144831] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.678 [2024-07-24 10:50:36.145049] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:09.678 [2024-07-24 10:50:36.145282] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:09.678 [2024-07-24 10:50:36.145459] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:09.678 pt1 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.678 10:50:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.936 10:50:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.936 "name": "raid_bdev1", 00:25:09.936 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:09.936 "strip_size_kb": 64, 00:25:09.936 "state": "configuring", 00:25:09.936 "raid_level": "raid5f", 00:25:09.936 "superblock": true, 00:25:09.936 "num_base_bdevs": 4, 00:25:09.936 "num_base_bdevs_discovered": 1, 00:25:09.936 
"num_base_bdevs_operational": 4, 00:25:09.936 "base_bdevs_list": [ 00:25:09.936 { 00:25:09.936 "name": "pt1", 00:25:09.936 "uuid": "0903c4dd-bfe0-5bac-9344-1cc74685641c", 00:25:09.936 "is_configured": true, 00:25:09.936 "data_offset": 2048, 00:25:09.936 "data_size": 63488 00:25:09.936 }, 00:25:09.936 { 00:25:09.936 "name": null, 00:25:09.936 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:09.936 "is_configured": false, 00:25:09.936 "data_offset": 2048, 00:25:09.936 "data_size": 63488 00:25:09.936 }, 00:25:09.936 { 00:25:09.936 "name": null, 00:25:09.936 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:09.936 "is_configured": false, 00:25:09.936 "data_offset": 2048, 00:25:09.936 "data_size": 63488 00:25:09.936 }, 00:25:09.936 { 00:25:09.936 "name": null, 00:25:09.936 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:09.936 "is_configured": false, 00:25:09.936 "data_offset": 2048, 00:25:09.936 "data_size": 63488 00:25:09.936 } 00:25:09.936 ] 00:25:09.936 }' 00:25:09.936 10:50:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.936 10:50:36 -- common/autotest_common.sh@10 -- # set +x 00:25:10.869 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:25:10.869 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:10.869 10:50:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:10.869 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:10.869 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:10.869 10:50:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:11.126 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:11.126 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:11.126 10:50:37 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:11.383 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:11.383 10:50:37 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:11.383 10:50:37 -- bdev/bdev_raid.sh@489 -- # i=3 00:25:11.383 10:50:37 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:11.641 [2024-07-24 10:50:38.198009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:11.641 [2024-07-24 10:50:38.198448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.641 [2024-07-24 10:50:38.198646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:11.641 [2024-07-24 10:50:38.198786] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.641 [2024-07-24 10:50:38.199443] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.641 [2024-07-24 10:50:38.199669] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:11.641 [2024-07-24 10:50:38.199890] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:11.641 [2024-07-24 10:50:38.200016] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:11.641 [2024-07-24 10:50:38.200126] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:11.641 [2024-07-24 
10:50:38.200204] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:25:11.642 [2024-07-24 10:50:38.200468] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:11.642 pt4 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.642 10:50:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.900 10:50:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:11.900 "name": "raid_bdev1", 00:25:11.900 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:11.900 "strip_size_kb": 64, 00:25:11.900 "state": "configuring", 00:25:11.900 "raid_level": "raid5f", 00:25:11.900 "superblock": true, 00:25:11.900 "num_base_bdevs": 4, 00:25:11.900 "num_base_bdevs_discovered": 1, 00:25:11.900 "num_base_bdevs_operational": 3, 00:25:11.900 "base_bdevs_list": [ 00:25:11.900 { 00:25:11.900 "name": null, 00:25:11.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:11.900 "is_configured": false, 00:25:11.900 "data_offset": 2048, 00:25:11.900 "data_size": 63488 00:25:11.900 }, 00:25:11.900 { 00:25:11.900 "name": null, 00:25:11.900 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:11.900 "is_configured": false, 00:25:11.900 "data_offset": 2048, 00:25:11.900 "data_size": 63488 00:25:11.900 }, 00:25:11.900 { 00:25:11.900 "name": null, 00:25:11.900 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:11.900 "is_configured": false, 00:25:11.900 "data_offset": 2048, 00:25:11.900 "data_size": 63488 00:25:11.900 }, 00:25:11.900 { 00:25:11.900 "name": "pt4", 00:25:11.900 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:11.900 "is_configured": true, 00:25:11.900 "data_offset": 2048, 00:25:11.900 "data_size": 63488 00:25:11.900 } 00:25:11.900 ] 00:25:11.900 }' 00:25:11.900 10:50:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:11.900 10:50:38 -- common/autotest_common.sh@10 -- # set +x 00:25:12.466 10:50:39 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:25:12.466 10:50:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:12.466 10:50:39 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:12.724 [2024-07-24 10:50:39.288185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:12.724 [2024-07-24 10:50:39.288643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.724 [2024-07-24 10:50:39.288860] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 
00:25:12.724 [2024-07-24 10:50:39.289013] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.724 [2024-07-24 10:50:39.289698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.724 [2024-07-24 10:50:39.289893] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:12.724 [2024-07-24 10:50:39.290133] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:12.724 [2024-07-24 10:50:39.290280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:12.724 pt2 00:25:12.724 10:50:39 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:12.724 10:50:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:12.724 10:50:39 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:12.981 [2024-07-24 10:50:39.524255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:12.981 [2024-07-24 10:50:39.524575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.981 [2024-07-24 10:50:39.524759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:12.981 [2024-07-24 10:50:39.524913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.981 [2024-07-24 10:50:39.525593] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.981 [2024-07-24 10:50:39.525794] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:12.981 [2024-07-24 10:50:39.526009] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:12.981 [2024-07-24 10:50:39.526131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:12.981 [2024-07-24 10:50:39.526431] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:25:12.981 [2024-07-24 10:50:39.526553] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:12.981 [2024-07-24 10:50:39.526750] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:25:12.981 [2024-07-24 10:50:39.527873] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:25:12.981 [2024-07-24 10:50:39.528026] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:25:12.981 [2024-07-24 10:50:39.528388] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.981 pt3 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:12.981 10:50:39 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.981 10:50:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.239 10:50:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.239 "name": "raid_bdev1", 00:25:13.239 "uuid": "94e76df3-722a-4c49-975b-ca396fc12db8", 00:25:13.239 "strip_size_kb": 64, 00:25:13.239 "state": "online", 00:25:13.239 "raid_level": "raid5f", 00:25:13.239 "superblock": true, 00:25:13.239 "num_base_bdevs": 4, 00:25:13.239 "num_base_bdevs_discovered": 3, 00:25:13.239 "num_base_bdevs_operational": 3, 00:25:13.239 "base_bdevs_list": [ 00:25:13.239 { 00:25:13.239 "name": null, 00:25:13.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.239 "is_configured": false, 00:25:13.239 "data_offset": 2048, 00:25:13.239 "data_size": 63488 00:25:13.239 }, 00:25:13.239 { 00:25:13.239 "name": "pt2", 00:25:13.239 "uuid": "4a74e7a7-57e8-5c43-908a-53233c750e3d", 00:25:13.239 "is_configured": true, 00:25:13.239 "data_offset": 2048, 00:25:13.239 "data_size": 63488 00:25:13.239 }, 00:25:13.239 { 00:25:13.239 "name": "pt3", 00:25:13.239 "uuid": "24af951f-c183-5590-8179-025c83a18777", 00:25:13.239 "is_configured": true, 00:25:13.239 "data_offset": 2048, 00:25:13.239 "data_size": 63488 00:25:13.239 }, 00:25:13.239 { 00:25:13.239 "name": "pt4", 00:25:13.239 "uuid": "4b5d30ae-8f61-5c6c-a355-787510565ddd", 00:25:13.239 "is_configured": true, 00:25:13.239 "data_offset": 2048, 00:25:13.239 "data_size": 63488 00:25:13.239 } 00:25:13.239 ] 00:25:13.239 }' 00:25:13.239 10:50:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.239 10:50:39 -- common/autotest_common.sh@10 -- # set +x 00:25:14.172 10:50:40 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:14.172 10:50:40 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:14.430 [2024-07-24 10:50:40.876907] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:14.430 10:50:40 -- bdev/bdev_raid.sh@506 -- # '[' 94e76df3-722a-4c49-975b-ca396fc12db8 '!=' 94e76df3-722a-4c49-975b-ca396fc12db8 ']' 00:25:14.430 10:50:40 -- bdev/bdev_raid.sh@511 -- # killprocess 141473 00:25:14.430 10:50:40 -- common/autotest_common.sh@926 -- # '[' -z 141473 ']' 00:25:14.430 10:50:40 -- common/autotest_common.sh@930 -- # kill -0 141473 00:25:14.430 10:50:40 -- common/autotest_common.sh@931 -- # uname 00:25:14.430 10:50:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:14.430 10:50:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141473 00:25:14.430 killing process with pid 141473 00:25:14.430 10:50:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:14.430 10:50:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:14.430 10:50:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141473' 00:25:14.430 10:50:40 -- common/autotest_common.sh@945 -- # kill 141473 00:25:14.430 10:50:40 -- common/autotest_common.sh@950 -- # wait 141473 00:25:14.430 [2024-07-24 10:50:40.918934] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:14.430 [2024-07-24 10:50:40.919086] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.430 [2024-07-24 10:50:40.919190] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.430 [2024-07-24 10:50:40.919332] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:25:14.430 [2024-07-24 10:50:40.997844] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:14.688 10:50:41 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:14.688 00:25:14.688 real 0m24.113s 00:25:14.688 user 0m45.221s 00:25:14.688 sys 0m2.941s 00:25:14.688 10:50:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.688 ************************************ 00:25:14.688 END TEST raid5f_superblock_test 00:25:14.688 10:50:41 -- common/autotest_common.sh@10 -- # set +x 00:25:14.688 ************************************ 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:14.946 10:50:41 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:14.946 10:50:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:14.946 10:50:41 -- common/autotest_common.sh@10 -- # set +x 00:25:14.946 ************************************ 00:25:14.946 START TEST raid5f_rebuild_test 00:25:14.946 ************************************ 00:25:14.946 10:50:41 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@534 -- # 
create_arg+=' -z 64' 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@544 -- # raid_pid=142167 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142167 /var/tmp/spdk-raid.sock 00:25:14.946 10:50:41 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:14.946 10:50:41 -- common/autotest_common.sh@819 -- # '[' -z 142167 ']' 00:25:14.946 10:50:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:14.946 10:50:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:14.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:14.946 10:50:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:14.946 10:50:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:14.946 10:50:41 -- common/autotest_common.sh@10 -- # set +x 00:25:14.946 [2024-07-24 10:50:41.478362] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:14.946 [2024-07-24 10:50:41.478843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142167 ] 00:25:14.946 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:14.946 Zero copy mechanism will not be used. 00:25:14.946 [2024-07-24 10:50:41.621406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.204 [2024-07-24 10:50:41.746224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.204 [2024-07-24 10:50:41.820957] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:16.136 10:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:16.136 10:50:42 -- common/autotest_common.sh@852 -- # return 0 00:25:16.136 10:50:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:16.136 10:50:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:16.136 10:50:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:16.136 BaseBdev1 00:25:16.136 10:50:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:16.136 10:50:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:16.136 10:50:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:16.394 BaseBdev2 00:25:16.394 10:50:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:16.394 10:50:42 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:16.394 10:50:42 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:16.652 BaseBdev3 00:25:16.652 10:50:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:16.652 10:50:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:16.652 10:50:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:16.910 BaseBdev4 00:25:16.910 10:50:43 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:17.167 spare_malloc 00:25:17.167 10:50:43 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:17.433 spare_delay 00:25:17.433 10:50:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:17.708 [2024-07-24 10:50:44.296613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:17.708 [2024-07-24 10:50:44.297023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.708 [2024-07-24 10:50:44.297231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:17.708 [2024-07-24 10:50:44.297431] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.708 [2024-07-24 10:50:44.300586] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.708 [2024-07-24 10:50:44.300780] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:17.708 spare 00:25:17.708 10:50:44 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:17.966 [2024-07-24 10:50:44.513342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:17.966 [2024-07-24 10:50:44.515931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:17.966 [2024-07-24 10:50:44.516131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:17.966 [2024-07-24 10:50:44.516232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:17.966 [2024-07-24 10:50:44.516452] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:25:17.966 [2024-07-24 10:50:44.516556] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:17.966 [2024-07-24 10:50:44.516860] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000022c0 00:25:17.966 [2024-07-24 10:50:44.517866] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:25:17.966 [2024-07-24 10:50:44.518004] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:25:17.966 [2024-07-24 10:50:44.518426] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:17.966 10:50:44 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.966 10:50:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.223 10:50:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:18.223 "name": "raid_bdev1", 00:25:18.223 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:18.223 "strip_size_kb": 64, 00:25:18.223 "state": "online", 00:25:18.223 "raid_level": "raid5f", 00:25:18.223 "superblock": false, 00:25:18.223 "num_base_bdevs": 4, 00:25:18.223 "num_base_bdevs_discovered": 4, 00:25:18.223 "num_base_bdevs_operational": 4, 00:25:18.223 "base_bdevs_list": [ 00:25:18.223 { 00:25:18.223 "name": "BaseBdev1", 00:25:18.224 "uuid": "654af368-170a-433c-bae8-aafd6ebc0f8f", 00:25:18.224 "is_configured": true, 00:25:18.224 "data_offset": 0, 00:25:18.224 "data_size": 65536 00:25:18.224 }, 00:25:18.224 { 00:25:18.224 "name": "BaseBdev2", 00:25:18.224 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:18.224 "is_configured": true, 00:25:18.224 "data_offset": 0, 00:25:18.224 "data_size": 65536 00:25:18.224 }, 00:25:18.224 { 00:25:18.224 "name": "BaseBdev3", 00:25:18.224 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:18.224 "is_configured": true, 00:25:18.224 "data_offset": 0, 00:25:18.224 "data_size": 65536 00:25:18.224 }, 00:25:18.224 { 00:25:18.224 "name": "BaseBdev4", 00:25:18.224 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:18.224 "is_configured": true, 00:25:18.224 "data_offset": 0, 00:25:18.224 "data_size": 65536 00:25:18.224 } 00:25:18.224 ] 00:25:18.224 }' 00:25:18.224 10:50:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:18.224 10:50:44 -- common/autotest_common.sh@10 -- # set +x 00:25:18.789 10:50:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:18.789 10:50:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:19.049 [2024-07-24 10:50:45.659567] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:19.049 10:50:45 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:19.049 10:50:45 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:19.049 10:50:45 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.307 10:50:45 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:19.307 10:50:45 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:19.307 10:50:45 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:19.307 10:50:45 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@12 -- # local i 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:19.307 10:50:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:19.563 [2024-07-24 
10:50:46.191762] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:25:19.563 /dev/nbd0 00:25:19.563 10:50:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:19.563 10:50:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:19.564 10:50:46 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:19.564 10:50:46 -- common/autotest_common.sh@857 -- # local i 00:25:19.564 10:50:46 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:19.564 10:50:46 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:19.564 10:50:46 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:19.564 10:50:46 -- common/autotest_common.sh@861 -- # break 00:25:19.564 10:50:46 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:19.564 10:50:46 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:19.564 10:50:46 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:19.822 1+0 records in 00:25:19.822 1+0 records out 00:25:19.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616187 s, 6.6 MB/s 00:25:19.822 10:50:46 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:19.822 10:50:46 -- common/autotest_common.sh@874 -- # size=4096 00:25:19.822 10:50:46 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:19.822 10:50:46 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:19.822 10:50:46 -- common/autotest_common.sh@877 -- # return 0 00:25:19.822 10:50:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:19.822 10:50:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:19.822 10:50:46 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:19.822 10:50:46 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:19.822 10:50:46 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:19.822 10:50:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:20.386 512+0 records in 00:25:20.386 512+0 records out 00:25:20.386 100663296 bytes (101 MB, 96 MiB) copied, 0.577433 s, 174 MB/s 00:25:20.386 10:50:46 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:20.386 10:50:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:20.386 10:50:46 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:20.386 10:50:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:20.386 10:50:46 -- bdev/nbd_common.sh@51 -- # local i 00:25:20.386 10:50:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:20.386 10:50:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:20.644 [2024-07-24 10:50:47.084182] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@41 -- # break 00:25:20.644 10:50:47 -- bdev/nbd_common.sh@45 -- # return 0 00:25:20.644 10:50:47 -- bdev/bdev_raid.sh@591 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:20.644 [2024-07-24 10:50:47.323717] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.902 10:50:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.161 10:50:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.161 "name": "raid_bdev1", 00:25:21.161 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:21.161 "strip_size_kb": 64, 00:25:21.161 "state": "online", 00:25:21.161 "raid_level": "raid5f", 00:25:21.161 "superblock": false, 00:25:21.161 "num_base_bdevs": 4, 00:25:21.161 "num_base_bdevs_discovered": 3, 00:25:21.161 "num_base_bdevs_operational": 3, 00:25:21.161 "base_bdevs_list": [ 00:25:21.161 { 00:25:21.161 "name": null, 00:25:21.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:21.161 "is_configured": false, 00:25:21.161 "data_offset": 0, 00:25:21.161 "data_size": 65536 00:25:21.161 }, 00:25:21.161 { 00:25:21.161 "name": "BaseBdev2", 00:25:21.161 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:21.161 "is_configured": true, 00:25:21.161 "data_offset": 0, 00:25:21.161 "data_size": 65536 00:25:21.161 }, 00:25:21.161 { 00:25:21.161 "name": "BaseBdev3", 00:25:21.161 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:21.161 "is_configured": true, 00:25:21.161 "data_offset": 0, 00:25:21.161 "data_size": 65536 00:25:21.161 }, 00:25:21.161 { 00:25:21.161 "name": "BaseBdev4", 00:25:21.161 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:21.161 "is_configured": true, 00:25:21.161 "data_offset": 0, 00:25:21.161 "data_size": 65536 00:25:21.161 } 00:25:21.161 ] 00:25:21.161 }' 00:25:21.161 10:50:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.161 10:50:47 -- common/autotest_common.sh@10 -- # set +x 00:25:21.727 10:50:48 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:21.985 [2024-07-24 10:50:48.436351] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:21.985 [2024-07-24 10:50:48.436856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:21.985 [2024-07-24 10:50:48.443291] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027a60 00:25:21.985 [2024-07-24 10:50:48.446893] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:21.985 10:50:48 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:22.922 10:50:49 -- bdev/bdev_raid.sh@601 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:22.922 10:50:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:22.922 10:50:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:22.922 10:50:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:22.922 10:50:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:22.922 10:50:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.922 10:50:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.181 10:50:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:23.181 "name": "raid_bdev1", 00:25:23.181 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:23.181 "strip_size_kb": 64, 00:25:23.181 "state": "online", 00:25:23.181 "raid_level": "raid5f", 00:25:23.181 "superblock": false, 00:25:23.181 "num_base_bdevs": 4, 00:25:23.181 "num_base_bdevs_discovered": 4, 00:25:23.181 "num_base_bdevs_operational": 4, 00:25:23.181 "process": { 00:25:23.181 "type": "rebuild", 00:25:23.181 "target": "spare", 00:25:23.181 "progress": { 00:25:23.181 "blocks": 23040, 00:25:23.181 "percent": 11 00:25:23.181 } 00:25:23.181 }, 00:25:23.181 "base_bdevs_list": [ 00:25:23.181 { 00:25:23.182 "name": "spare", 00:25:23.182 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:23.182 "is_configured": true, 00:25:23.182 "data_offset": 0, 00:25:23.182 "data_size": 65536 00:25:23.182 }, 00:25:23.182 { 00:25:23.182 "name": "BaseBdev2", 00:25:23.182 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:23.182 "is_configured": true, 00:25:23.182 "data_offset": 0, 00:25:23.182 "data_size": 65536 00:25:23.182 }, 00:25:23.182 { 00:25:23.182 "name": "BaseBdev3", 00:25:23.182 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:23.182 "is_configured": true, 00:25:23.182 "data_offset": 0, 00:25:23.182 "data_size": 65536 00:25:23.182 }, 00:25:23.182 { 00:25:23.182 "name": "BaseBdev4", 00:25:23.182 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:23.182 "is_configured": true, 00:25:23.182 "data_offset": 0, 00:25:23.182 "data_size": 65536 00:25:23.182 } 00:25:23.182 ] 00:25:23.182 }' 00:25:23.182 10:50:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:23.182 10:50:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:23.182 10:50:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:23.182 10:50:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:23.182 10:50:49 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:23.441 [2024-07-24 10:50:50.057356] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:23.442 [2024-07-24 10:50:50.064526] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:23.442 [2024-07-24 10:50:50.065015] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.442 10:50:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:23.701 10:50:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:23.701 "name": "raid_bdev1", 00:25:23.701 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:23.701 "strip_size_kb": 64, 00:25:23.701 "state": "online", 00:25:23.701 "raid_level": "raid5f", 00:25:23.701 "superblock": false, 00:25:23.701 "num_base_bdevs": 4, 00:25:23.701 "num_base_bdevs_discovered": 3, 00:25:23.701 "num_base_bdevs_operational": 3, 00:25:23.701 "base_bdevs_list": [ 00:25:23.701 { 00:25:23.701 "name": null, 00:25:23.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.701 "is_configured": false, 00:25:23.701 "data_offset": 0, 00:25:23.701 "data_size": 65536 00:25:23.701 }, 00:25:23.701 { 00:25:23.701 "name": "BaseBdev2", 00:25:23.701 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:23.701 "is_configured": true, 00:25:23.701 "data_offset": 0, 00:25:23.701 "data_size": 65536 00:25:23.701 }, 00:25:23.701 { 00:25:23.701 "name": "BaseBdev3", 00:25:23.701 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:23.701 "is_configured": true, 00:25:23.701 "data_offset": 0, 00:25:23.701 "data_size": 65536 00:25:23.701 }, 00:25:23.701 { 00:25:23.701 "name": "BaseBdev4", 00:25:23.701 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:23.701 "is_configured": true, 00:25:23.701 "data_offset": 0, 00:25:23.701 "data_size": 65536 00:25:23.701 } 00:25:23.701 ] 00:25:23.701 }' 00:25:23.962 10:50:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:23.962 10:50:50 -- common/autotest_common.sh@10 -- # set +x 00:25:24.530 10:50:51 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:24.530 10:50:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:24.530 10:50:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:24.530 10:50:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:24.530 10:50:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:24.530 10:50:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.530 10:50:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.789 10:50:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:24.789 "name": "raid_bdev1", 00:25:24.789 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:24.789 "strip_size_kb": 64, 00:25:24.789 "state": "online", 00:25:24.789 "raid_level": "raid5f", 00:25:24.789 "superblock": false, 00:25:24.789 "num_base_bdevs": 4, 00:25:24.789 "num_base_bdevs_discovered": 3, 00:25:24.789 "num_base_bdevs_operational": 3, 00:25:24.789 "base_bdevs_list": [ 00:25:24.789 { 00:25:24.789 "name": null, 00:25:24.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:24.789 "is_configured": false, 00:25:24.789 "data_offset": 0, 00:25:24.789 "data_size": 65536 00:25:24.789 }, 00:25:24.789 { 00:25:24.789 "name": "BaseBdev2", 00:25:24.789 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:24.789 "is_configured": true, 
00:25:24.789 "data_offset": 0, 00:25:24.789 "data_size": 65536 00:25:24.789 }, 00:25:24.789 { 00:25:24.789 "name": "BaseBdev3", 00:25:24.789 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:24.789 "is_configured": true, 00:25:24.789 "data_offset": 0, 00:25:24.789 "data_size": 65536 00:25:24.789 }, 00:25:24.789 { 00:25:24.789 "name": "BaseBdev4", 00:25:24.789 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:24.789 "is_configured": true, 00:25:24.789 "data_offset": 0, 00:25:24.789 "data_size": 65536 00:25:24.789 } 00:25:24.789 ] 00:25:24.789 }' 00:25:24.789 10:50:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:24.789 10:50:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:24.789 10:50:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:24.789 10:50:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:24.789 10:50:51 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:25.048 [2024-07-24 10:50:51.676210] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:25.048 [2024-07-24 10:50:51.676363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:25.048 [2024-07-24 10:50:51.682779] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027c00 00:25:25.048 [2024-07-24 10:50:51.685816] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:25.048 10:50:51 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.422 10:50:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:26.422 "name": "raid_bdev1", 00:25:26.422 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:26.422 "strip_size_kb": 64, 00:25:26.422 "state": "online", 00:25:26.422 "raid_level": "raid5f", 00:25:26.422 "superblock": false, 00:25:26.422 "num_base_bdevs": 4, 00:25:26.422 "num_base_bdevs_discovered": 4, 00:25:26.423 "num_base_bdevs_operational": 4, 00:25:26.423 "process": { 00:25:26.423 "type": "rebuild", 00:25:26.423 "target": "spare", 00:25:26.423 "progress": { 00:25:26.423 "blocks": 23040, 00:25:26.423 "percent": 11 00:25:26.423 } 00:25:26.423 }, 00:25:26.423 "base_bdevs_list": [ 00:25:26.423 { 00:25:26.423 "name": "spare", 00:25:26.423 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:26.423 "is_configured": true, 00:25:26.423 "data_offset": 0, 00:25:26.423 "data_size": 65536 00:25:26.423 }, 00:25:26.423 { 00:25:26.423 "name": "BaseBdev2", 00:25:26.423 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:26.423 "is_configured": true, 00:25:26.423 "data_offset": 0, 00:25:26.423 "data_size": 65536 00:25:26.423 }, 00:25:26.423 { 00:25:26.423 "name": "BaseBdev3", 00:25:26.423 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:26.423 "is_configured": true, 00:25:26.423 "data_offset": 0, 
00:25:26.423 "data_size": 65536 00:25:26.423 }, 00:25:26.423 { 00:25:26.423 "name": "BaseBdev4", 00:25:26.423 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:26.423 "is_configured": true, 00:25:26.423 "data_offset": 0, 00:25:26.423 "data_size": 65536 00:25:26.423 } 00:25:26.423 ] 00:25:26.423 }' 00:25:26.423 10:50:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@657 -- # local timeout=733 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.423 10:50:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:26.681 10:50:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:26.681 "name": "raid_bdev1", 00:25:26.681 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:26.681 "strip_size_kb": 64, 00:25:26.681 "state": "online", 00:25:26.681 "raid_level": "raid5f", 00:25:26.681 "superblock": false, 00:25:26.681 "num_base_bdevs": 4, 00:25:26.681 "num_base_bdevs_discovered": 4, 00:25:26.681 "num_base_bdevs_operational": 4, 00:25:26.681 "process": { 00:25:26.681 "type": "rebuild", 00:25:26.681 "target": "spare", 00:25:26.681 "progress": { 00:25:26.681 "blocks": 30720, 00:25:26.681 "percent": 15 00:25:26.681 } 00:25:26.681 }, 00:25:26.681 "base_bdevs_list": [ 00:25:26.681 { 00:25:26.681 "name": "spare", 00:25:26.681 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:26.681 "is_configured": true, 00:25:26.681 "data_offset": 0, 00:25:26.681 "data_size": 65536 00:25:26.681 }, 00:25:26.681 { 00:25:26.681 "name": "BaseBdev2", 00:25:26.681 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:26.681 "is_configured": true, 00:25:26.681 "data_offset": 0, 00:25:26.681 "data_size": 65536 00:25:26.681 }, 00:25:26.681 { 00:25:26.681 "name": "BaseBdev3", 00:25:26.681 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:26.681 "is_configured": true, 00:25:26.681 "data_offset": 0, 00:25:26.681 "data_size": 65536 00:25:26.681 }, 00:25:26.681 { 00:25:26.681 "name": "BaseBdev4", 00:25:26.681 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:26.681 "is_configured": true, 00:25:26.681 "data_offset": 0, 00:25:26.681 "data_size": 65536 00:25:26.681 } 00:25:26.681 ] 00:25:26.681 }' 00:25:26.681 10:50:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:26.940 10:50:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:26.940 10:50:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:26.940 10:50:53 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:26.940 10:50:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.875 10:50:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.133 10:50:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:28.133 "name": "raid_bdev1", 00:25:28.133 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:28.133 "strip_size_kb": 64, 00:25:28.133 "state": "online", 00:25:28.133 "raid_level": "raid5f", 00:25:28.133 "superblock": false, 00:25:28.133 "num_base_bdevs": 4, 00:25:28.133 "num_base_bdevs_discovered": 4, 00:25:28.133 "num_base_bdevs_operational": 4, 00:25:28.133 "process": { 00:25:28.133 "type": "rebuild", 00:25:28.133 "target": "spare", 00:25:28.133 "progress": { 00:25:28.133 "blocks": 57600, 00:25:28.133 "percent": 29 00:25:28.133 } 00:25:28.133 }, 00:25:28.133 "base_bdevs_list": [ 00:25:28.133 { 00:25:28.133 "name": "spare", 00:25:28.133 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:28.133 "is_configured": true, 00:25:28.133 "data_offset": 0, 00:25:28.133 "data_size": 65536 00:25:28.133 }, 00:25:28.133 { 00:25:28.133 "name": "BaseBdev2", 00:25:28.133 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:28.133 "is_configured": true, 00:25:28.133 "data_offset": 0, 00:25:28.133 "data_size": 65536 00:25:28.133 }, 00:25:28.133 { 00:25:28.133 "name": "BaseBdev3", 00:25:28.133 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:28.133 "is_configured": true, 00:25:28.134 "data_offset": 0, 00:25:28.134 "data_size": 65536 00:25:28.134 }, 00:25:28.134 { 00:25:28.134 "name": "BaseBdev4", 00:25:28.134 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:28.134 "is_configured": true, 00:25:28.134 "data_offset": 0, 00:25:28.134 "data_size": 65536 00:25:28.134 } 00:25:28.134 ] 00:25:28.134 }' 00:25:28.134 10:50:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:28.134 10:50:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:28.134 10:50:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:28.394 10:50:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:28.394 10:50:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:29.375 10:50:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:29.375 10:50:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:29.375 10:50:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:29.375 10:50:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:29.375 10:50:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:29.376 10:50:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:29.376 10:50:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.376 10:50:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:25:29.633 10:50:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:29.633 "name": "raid_bdev1", 00:25:29.633 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:29.633 "strip_size_kb": 64, 00:25:29.633 "state": "online", 00:25:29.633 "raid_level": "raid5f", 00:25:29.633 "superblock": false, 00:25:29.633 "num_base_bdevs": 4, 00:25:29.633 "num_base_bdevs_discovered": 4, 00:25:29.633 "num_base_bdevs_operational": 4, 00:25:29.633 "process": { 00:25:29.633 "type": "rebuild", 00:25:29.633 "target": "spare", 00:25:29.633 "progress": { 00:25:29.633 "blocks": 84480, 00:25:29.634 "percent": 42 00:25:29.634 } 00:25:29.634 }, 00:25:29.634 "base_bdevs_list": [ 00:25:29.634 { 00:25:29.634 "name": "spare", 00:25:29.634 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:29.634 "is_configured": true, 00:25:29.634 "data_offset": 0, 00:25:29.634 "data_size": 65536 00:25:29.634 }, 00:25:29.634 { 00:25:29.634 "name": "BaseBdev2", 00:25:29.634 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:29.634 "is_configured": true, 00:25:29.634 "data_offset": 0, 00:25:29.634 "data_size": 65536 00:25:29.634 }, 00:25:29.634 { 00:25:29.634 "name": "BaseBdev3", 00:25:29.634 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:29.634 "is_configured": true, 00:25:29.634 "data_offset": 0, 00:25:29.634 "data_size": 65536 00:25:29.634 }, 00:25:29.634 { 00:25:29.634 "name": "BaseBdev4", 00:25:29.634 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:29.634 "is_configured": true, 00:25:29.634 "data_offset": 0, 00:25:29.634 "data_size": 65536 00:25:29.634 } 00:25:29.634 ] 00:25:29.634 }' 00:25:29.634 10:50:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:29.634 10:50:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:29.634 10:50:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:29.634 10:50:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:29.634 10:50:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:31.008 "name": "raid_bdev1", 00:25:31.008 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:31.008 "strip_size_kb": 64, 00:25:31.008 "state": "online", 00:25:31.008 "raid_level": "raid5f", 00:25:31.008 "superblock": false, 00:25:31.008 "num_base_bdevs": 4, 00:25:31.008 "num_base_bdevs_discovered": 4, 00:25:31.008 "num_base_bdevs_operational": 4, 00:25:31.008 "process": { 00:25:31.008 "type": "rebuild", 00:25:31.008 "target": "spare", 00:25:31.008 "progress": { 00:25:31.008 "blocks": 111360, 00:25:31.008 "percent": 56 00:25:31.008 } 00:25:31.008 }, 00:25:31.008 "base_bdevs_list": [ 00:25:31.008 { 00:25:31.008 "name": "spare", 00:25:31.008 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:31.008 "is_configured": true, 00:25:31.008 "data_offset": 0, 
00:25:31.008 "data_size": 65536 00:25:31.008 }, 00:25:31.008 { 00:25:31.008 "name": "BaseBdev2", 00:25:31.008 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:31.008 "is_configured": true, 00:25:31.008 "data_offset": 0, 00:25:31.008 "data_size": 65536 00:25:31.008 }, 00:25:31.008 { 00:25:31.008 "name": "BaseBdev3", 00:25:31.008 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:31.008 "is_configured": true, 00:25:31.008 "data_offset": 0, 00:25:31.008 "data_size": 65536 00:25:31.008 }, 00:25:31.008 { 00:25:31.008 "name": "BaseBdev4", 00:25:31.008 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:31.008 "is_configured": true, 00:25:31.008 "data_offset": 0, 00:25:31.008 "data_size": 65536 00:25:31.008 } 00:25:31.008 ] 00:25:31.008 }' 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:31.008 10:50:57 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.438 10:50:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:32.438 "name": "raid_bdev1", 00:25:32.438 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:32.438 "strip_size_kb": 64, 00:25:32.438 "state": "online", 00:25:32.438 "raid_level": "raid5f", 00:25:32.439 "superblock": false, 00:25:32.439 "num_base_bdevs": 4, 00:25:32.439 "num_base_bdevs_discovered": 4, 00:25:32.439 "num_base_bdevs_operational": 4, 00:25:32.439 "process": { 00:25:32.439 "type": "rebuild", 00:25:32.439 "target": "spare", 00:25:32.439 "progress": { 00:25:32.439 "blocks": 136320, 00:25:32.439 "percent": 69 00:25:32.439 } 00:25:32.439 }, 00:25:32.439 "base_bdevs_list": [ 00:25:32.439 { 00:25:32.439 "name": "spare", 00:25:32.439 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:32.439 "is_configured": true, 00:25:32.439 "data_offset": 0, 00:25:32.439 "data_size": 65536 00:25:32.439 }, 00:25:32.439 { 00:25:32.439 "name": "BaseBdev2", 00:25:32.439 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:32.439 "is_configured": true, 00:25:32.439 "data_offset": 0, 00:25:32.439 "data_size": 65536 00:25:32.439 }, 00:25:32.439 { 00:25:32.439 "name": "BaseBdev3", 00:25:32.439 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:32.439 "is_configured": true, 00:25:32.439 "data_offset": 0, 00:25:32.439 "data_size": 65536 00:25:32.439 }, 00:25:32.439 { 00:25:32.439 "name": "BaseBdev4", 00:25:32.439 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:32.439 "is_configured": true, 00:25:32.439 "data_offset": 0, 00:25:32.439 "data_size": 65536 00:25:32.439 } 00:25:32.439 ] 00:25:32.439 }' 00:25:32.439 10:50:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 
00:25:32.439 10:50:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.439 10:50:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:32.439 10:50:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.439 10:50:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.814 "name": "raid_bdev1", 00:25:33.814 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:33.814 "strip_size_kb": 64, 00:25:33.814 "state": "online", 00:25:33.814 "raid_level": "raid5f", 00:25:33.814 "superblock": false, 00:25:33.814 "num_base_bdevs": 4, 00:25:33.814 "num_base_bdevs_discovered": 4, 00:25:33.814 "num_base_bdevs_operational": 4, 00:25:33.814 "process": { 00:25:33.814 "type": "rebuild", 00:25:33.814 "target": "spare", 00:25:33.814 "progress": { 00:25:33.814 "blocks": 163200, 00:25:33.814 "percent": 83 00:25:33.814 } 00:25:33.814 }, 00:25:33.814 "base_bdevs_list": [ 00:25:33.814 { 00:25:33.814 "name": "spare", 00:25:33.814 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:33.814 "is_configured": true, 00:25:33.814 "data_offset": 0, 00:25:33.814 "data_size": 65536 00:25:33.814 }, 00:25:33.814 { 00:25:33.814 "name": "BaseBdev2", 00:25:33.814 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:33.814 "is_configured": true, 00:25:33.814 "data_offset": 0, 00:25:33.814 "data_size": 65536 00:25:33.814 }, 00:25:33.814 { 00:25:33.814 "name": "BaseBdev3", 00:25:33.814 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:33.814 "is_configured": true, 00:25:33.814 "data_offset": 0, 00:25:33.814 "data_size": 65536 00:25:33.814 }, 00:25:33.814 { 00:25:33.814 "name": "BaseBdev4", 00:25:33.814 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:33.814 "is_configured": true, 00:25:33.814 "data_offset": 0, 00:25:33.814 "data_size": 65536 00:25:33.814 } 00:25:33.814 ] 00:25:33.814 }' 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.814 10:51:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@188 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.194 "name": "raid_bdev1", 00:25:35.194 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:35.194 "strip_size_kb": 64, 00:25:35.194 "state": "online", 00:25:35.194 "raid_level": "raid5f", 00:25:35.194 "superblock": false, 00:25:35.194 "num_base_bdevs": 4, 00:25:35.194 "num_base_bdevs_discovered": 4, 00:25:35.194 "num_base_bdevs_operational": 4, 00:25:35.194 "process": { 00:25:35.194 "type": "rebuild", 00:25:35.194 "target": "spare", 00:25:35.194 "progress": { 00:25:35.194 "blocks": 190080, 00:25:35.194 "percent": 96 00:25:35.194 } 00:25:35.194 }, 00:25:35.194 "base_bdevs_list": [ 00:25:35.194 { 00:25:35.194 "name": "spare", 00:25:35.194 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:35.194 "is_configured": true, 00:25:35.194 "data_offset": 0, 00:25:35.194 "data_size": 65536 00:25:35.194 }, 00:25:35.194 { 00:25:35.194 "name": "BaseBdev2", 00:25:35.194 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:35.194 "is_configured": true, 00:25:35.194 "data_offset": 0, 00:25:35.194 "data_size": 65536 00:25:35.194 }, 00:25:35.194 { 00:25:35.194 "name": "BaseBdev3", 00:25:35.194 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:35.194 "is_configured": true, 00:25:35.194 "data_offset": 0, 00:25:35.194 "data_size": 65536 00:25:35.194 }, 00:25:35.194 { 00:25:35.194 "name": "BaseBdev4", 00:25:35.194 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:35.194 "is_configured": true, 00:25:35.194 "data_offset": 0, 00:25:35.194 "data_size": 65536 00:25:35.194 } 00:25:35.194 ] 00:25:35.194 }' 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.194 10:51:01 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:35.453 [2024-07-24 10:51:02.086680] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:35.453 [2024-07-24 10:51:02.087107] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:35.453 [2024-07-24 10:51:02.087369] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.388 10:51:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.647 "name": "raid_bdev1", 00:25:36.647 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:36.647 "strip_size_kb": 64, 00:25:36.647 "state": "online", 00:25:36.647 "raid_level": 
"raid5f", 00:25:36.647 "superblock": false, 00:25:36.647 "num_base_bdevs": 4, 00:25:36.647 "num_base_bdevs_discovered": 4, 00:25:36.647 "num_base_bdevs_operational": 4, 00:25:36.647 "base_bdevs_list": [ 00:25:36.647 { 00:25:36.647 "name": "spare", 00:25:36.647 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:36.647 "is_configured": true, 00:25:36.647 "data_offset": 0, 00:25:36.647 "data_size": 65536 00:25:36.647 }, 00:25:36.647 { 00:25:36.647 "name": "BaseBdev2", 00:25:36.647 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:36.647 "is_configured": true, 00:25:36.647 "data_offset": 0, 00:25:36.647 "data_size": 65536 00:25:36.647 }, 00:25:36.647 { 00:25:36.647 "name": "BaseBdev3", 00:25:36.647 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:36.647 "is_configured": true, 00:25:36.647 "data_offset": 0, 00:25:36.647 "data_size": 65536 00:25:36.647 }, 00:25:36.647 { 00:25:36.647 "name": "BaseBdev4", 00:25:36.647 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:36.647 "is_configured": true, 00:25:36.647 "data_offset": 0, 00:25:36.647 "data_size": 65536 00:25:36.647 } 00:25:36.647 ] 00:25:36.647 }' 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@660 -- # break 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.647 10:51:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.904 10:51:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.904 "name": "raid_bdev1", 00:25:36.904 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:36.904 "strip_size_kb": 64, 00:25:36.904 "state": "online", 00:25:36.904 "raid_level": "raid5f", 00:25:36.904 "superblock": false, 00:25:36.904 "num_base_bdevs": 4, 00:25:36.904 "num_base_bdevs_discovered": 4, 00:25:36.904 "num_base_bdevs_operational": 4, 00:25:36.904 "base_bdevs_list": [ 00:25:36.904 { 00:25:36.904 "name": "spare", 00:25:36.904 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:36.904 "is_configured": true, 00:25:36.904 "data_offset": 0, 00:25:36.904 "data_size": 65536 00:25:36.904 }, 00:25:36.904 { 00:25:36.904 "name": "BaseBdev2", 00:25:36.904 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:36.904 "is_configured": true, 00:25:36.904 "data_offset": 0, 00:25:36.904 "data_size": 65536 00:25:36.904 }, 00:25:36.904 { 00:25:36.904 "name": "BaseBdev3", 00:25:36.904 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:36.904 "is_configured": true, 00:25:36.904 "data_offset": 0, 00:25:36.904 "data_size": 65536 00:25:36.904 }, 00:25:36.904 { 00:25:36.904 "name": "BaseBdev4", 00:25:36.904 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:36.904 "is_configured": true, 00:25:36.904 "data_offset": 0, 00:25:36.904 "data_size": 65536 00:25:36.904 } 00:25:36.904 ] 00:25:36.904 }' 
00:25:36.904 10:51:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.162 10:51:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:37.420 10:51:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:37.420 "name": "raid_bdev1", 00:25:37.420 "uuid": "2591528f-eaec-426a-93fa-5792c8282a16", 00:25:37.420 "strip_size_kb": 64, 00:25:37.420 "state": "online", 00:25:37.420 "raid_level": "raid5f", 00:25:37.420 "superblock": false, 00:25:37.420 "num_base_bdevs": 4, 00:25:37.420 "num_base_bdevs_discovered": 4, 00:25:37.420 "num_base_bdevs_operational": 4, 00:25:37.420 "base_bdevs_list": [ 00:25:37.420 { 00:25:37.420 "name": "spare", 00:25:37.420 "uuid": "7af832c3-000d-5724-9543-22bae0fb8d8d", 00:25:37.420 "is_configured": true, 00:25:37.420 "data_offset": 0, 00:25:37.420 "data_size": 65536 00:25:37.420 }, 00:25:37.420 { 00:25:37.420 "name": "BaseBdev2", 00:25:37.420 "uuid": "b4d5beb0-4a32-4868-8087-f024f3a4146c", 00:25:37.420 "is_configured": true, 00:25:37.420 "data_offset": 0, 00:25:37.420 "data_size": 65536 00:25:37.420 }, 00:25:37.420 { 00:25:37.420 "name": "BaseBdev3", 00:25:37.420 "uuid": "e55c7191-51f5-43a2-83c0-f91db09ae3c1", 00:25:37.420 "is_configured": true, 00:25:37.420 "data_offset": 0, 00:25:37.420 "data_size": 65536 00:25:37.420 }, 00:25:37.420 { 00:25:37.420 "name": "BaseBdev4", 00:25:37.420 "uuid": "45ae6d4a-4fc4-43eb-a0c0-d7421ad32ef7", 00:25:37.420 "is_configured": true, 00:25:37.420 "data_offset": 0, 00:25:37.420 "data_size": 65536 00:25:37.420 } 00:25:37.420 ] 00:25:37.420 }' 00:25:37.420 10:51:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:37.420 10:51:03 -- common/autotest_common.sh@10 -- # set +x 00:25:37.986 10:51:04 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:38.246 [2024-07-24 10:51:04.861178] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:38.246 [2024-07-24 10:51:04.861568] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:38.246 [2024-07-24 10:51:04.861846] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:38.246 [2024-07-24 10:51:04.862075] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:25:38.246 [2024-07-24 10:51:04.862203] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:25:38.246 10:51:04 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.246 10:51:04 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:38.835 10:51:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:38.835 10:51:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:38.835 10:51:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@12 -- # local i 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:38.835 10:51:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:39.092 /dev/nbd0 00:25:39.092 10:51:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:39.092 10:51:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:39.092 10:51:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:39.092 10:51:05 -- common/autotest_common.sh@857 -- # local i 00:25:39.092 10:51:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:39.092 10:51:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:39.092 10:51:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:39.092 10:51:05 -- common/autotest_common.sh@861 -- # break 00:25:39.092 10:51:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:39.092 10:51:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:39.092 10:51:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:39.092 1+0 records in 00:25:39.092 1+0 records out 00:25:39.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735214 s, 5.6 MB/s 00:25:39.092 10:51:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:39.092 10:51:05 -- common/autotest_common.sh@874 -- # size=4096 00:25:39.092 10:51:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:39.092 10:51:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:39.092 10:51:05 -- common/autotest_common.sh@877 -- # return 0 00:25:39.092 10:51:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:39.092 10:51:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:39.092 10:51:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:39.349 /dev/nbd1 00:25:39.349 10:51:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:39.349 10:51:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:39.349 10:51:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:39.349 10:51:05 -- common/autotest_common.sh@857 -- # local i 00:25:39.349 10:51:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:39.349 
10:51:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:39.349 10:51:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:39.349 10:51:05 -- common/autotest_common.sh@861 -- # break 00:25:39.349 10:51:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:39.349 10:51:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:39.349 10:51:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:39.349 1+0 records in 00:25:39.349 1+0 records out 00:25:39.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563458 s, 7.3 MB/s 00:25:39.349 10:51:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:39.349 10:51:05 -- common/autotest_common.sh@874 -- # size=4096 00:25:39.349 10:51:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:39.349 10:51:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:39.349 10:51:05 -- common/autotest_common.sh@877 -- # return 0 00:25:39.349 10:51:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:39.349 10:51:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:39.349 10:51:05 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:39.606 10:51:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:39.606 10:51:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:39.606 10:51:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:25:39.606 10:51:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:39.606 10:51:06 -- bdev/nbd_common.sh@51 -- # local i 00:25:39.606 10:51:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:39.606 10:51:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@41 -- # break 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@45 -- # return 0 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:39.865 10:51:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@41 -- # break 00:25:40.123 10:51:06 -- bdev/nbd_common.sh@45 -- # return 0 00:25:40.123 10:51:06 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:40.123 10:51:06 -- bdev/bdev_raid.sh@709 -- # killprocess 142167 00:25:40.123 10:51:06 -- common/autotest_common.sh@926 -- # '[' -z 142167 ']' 00:25:40.123 10:51:06 -- 
common/autotest_common.sh@930 -- # kill -0 142167 00:25:40.123 10:51:06 -- common/autotest_common.sh@931 -- # uname 00:25:40.123 10:51:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:40.123 10:51:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142167 00:25:40.123 10:51:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:40.123 10:51:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:40.123 10:51:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142167' 00:25:40.123 killing process with pid 142167 00:25:40.123 10:51:06 -- common/autotest_common.sh@945 -- # kill 142167 00:25:40.123 Received shutdown signal, test time was about 60.000000 seconds 00:25:40.123 00:25:40.123 Latency(us) 00:25:40.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.123 =================================================================================================================== 00:25:40.123 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:40.123 [2024-07-24 10:51:06.677929] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:40.123 10:51:06 -- common/autotest_common.sh@950 -- # wait 142167 00:25:40.123 [2024-07-24 10:51:06.742159] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:40.690 ************************************ 00:25:40.690 END TEST raid5f_rebuild_test 00:25:40.690 ************************************ 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:40.690 00:25:40.690 real 0m25.668s 00:25:40.690 user 0m38.429s 00:25:40.690 sys 0m3.153s 00:25:40.690 10:51:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.690 10:51:07 -- common/autotest_common.sh@10 -- # set +x 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:40.690 10:51:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:40.690 10:51:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.690 10:51:07 -- common/autotest_common.sh@10 -- # set +x 00:25:40.690 ************************************ 00:25:40.690 START TEST raid5f_rebuild_test_sb 00:25:40.690 ************************************ 00:25:40.690 10:51:07 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:40.690 10:51:07 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@544 -- # raid_pid=142790 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@545 -- # waitforlisten 142790 /var/tmp/spdk-raid.sock 00:25:40.690 10:51:07 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:40.690 10:51:07 -- common/autotest_common.sh@819 -- # '[' -z 142790 ']' 00:25:40.690 10:51:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:40.690 10:51:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:40.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:40.690 10:51:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:40.690 10:51:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:40.690 10:51:07 -- common/autotest_common.sh@10 -- # set +x 00:25:40.690 [2024-07-24 10:51:07.198199] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:40.690 [2024-07-24 10:51:07.198415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142790 ] 00:25:40.690 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:40.690 Zero copy mechanism will not be used. 
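The lines above show bdevperf being launched against a dedicated RPC socket and the harness waiting for it to come up before issuing any bdev_raid RPCs. A simplified sketch of that bring-up is below; it is not the real waitforlisten helper from autotest_common.sh (which does more bookkeeping), but the binary path, socket, and flags are taken from the logged command line, and rpc_get_methods is used here only as a cheap liveness probe.

    # Sketch: start bdevperf in RPC-wait mode (-z) and poll its socket until
    # the RPC server answers, then continue with raid bdev setup.
    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done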
00:25:40.690 [2024-07-24 10:51:07.338706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.949 [2024-07-24 10:51:07.456014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.949 [2024-07-24 10:51:07.529769] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:41.895 10:51:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:41.895 10:51:08 -- common/autotest_common.sh@852 -- # return 0 00:25:41.895 10:51:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:41.895 10:51:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:41.895 10:51:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:41.895 BaseBdev1_malloc 00:25:41.895 10:51:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:42.180 [2024-07-24 10:51:08.715466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:42.180 [2024-07-24 10:51:08.715655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.180 [2024-07-24 10:51:08.715712] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000005a80 00:25:42.180 [2024-07-24 10:51:08.715770] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.180 [2024-07-24 10:51:08.718627] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.180 [2024-07-24 10:51:08.718691] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:42.180 BaseBdev1 00:25:42.180 10:51:08 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:42.180 10:51:08 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:42.180 10:51:08 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:42.438 BaseBdev2_malloc 00:25:42.438 10:51:08 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:42.696 [2024-07-24 10:51:09.210355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:42.696 [2024-07-24 10:51:09.210495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:42.696 [2024-07-24 10:51:09.210547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:25:42.696 [2024-07-24 10:51:09.210598] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:42.696 [2024-07-24 10:51:09.213287] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:42.696 [2024-07-24 10:51:09.213344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:42.696 BaseBdev2 00:25:42.696 10:51:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:42.696 10:51:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:42.696 10:51:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:42.954 BaseBdev3_malloc 00:25:42.954 10:51:09 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:25:43.212 [2024-07-24 10:51:09.782293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:43.212 [2024-07-24 10:51:09.782461] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.212 [2024-07-24 10:51:09.782517] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:43.212 [2024-07-24 10:51:09.782586] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.212 [2024-07-24 10:51:09.785267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.212 [2024-07-24 10:51:09.785330] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:43.212 BaseBdev3 00:25:43.212 10:51:09 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:43.212 10:51:09 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:43.212 10:51:09 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:43.470 BaseBdev4_malloc 00:25:43.470 10:51:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:43.728 [2024-07-24 10:51:10.261245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:43.728 [2024-07-24 10:51:10.261400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:43.728 [2024-07-24 10:51:10.261451] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:43.728 [2024-07-24 10:51:10.261506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:43.728 [2024-07-24 10:51:10.264230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:43.728 [2024-07-24 10:51:10.264308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:43.728 BaseBdev4 00:25:43.728 10:51:10 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:43.985 spare_malloc 00:25:43.985 10:51:10 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:44.243 spare_delay 00:25:44.243 10:51:10 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:44.502 [2024-07-24 10:51:11.024329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:44.502 [2024-07-24 10:51:11.024518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.502 [2024-07-24 10:51:11.024568] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:44.502 [2024-07-24 10:51:11.024622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.502 [2024-07-24 10:51:11.027698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.502 [2024-07-24 10:51:11.027777] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:44.502 spare 00:25:44.502 10:51:11 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:44.761 [2024-07-24 10:51:11.256486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:44.761 [2024-07-24 10:51:11.258796] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:44.761 [2024-07-24 10:51:11.258879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:44.761 [2024-07-24 10:51:11.258940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:44.761 [2024-07-24 10:51:11.259196] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:44.761 [2024-07-24 10:51:11.259220] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:44.761 [2024-07-24 10:51:11.259408] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:25:44.761 [2024-07-24 10:51:11.260321] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:44.761 [2024-07-24 10:51:11.260345] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:44.761 [2024-07-24 10:51:11.260578] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.761 10:51:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.019 10:51:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:45.019 "name": "raid_bdev1", 00:25:45.019 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:45.019 "strip_size_kb": 64, 00:25:45.019 "state": "online", 00:25:45.019 "raid_level": "raid5f", 00:25:45.019 "superblock": true, 00:25:45.019 "num_base_bdevs": 4, 00:25:45.019 "num_base_bdevs_discovered": 4, 00:25:45.019 "num_base_bdevs_operational": 4, 00:25:45.019 "base_bdevs_list": [ 00:25:45.019 { 00:25:45.019 "name": "BaseBdev1", 00:25:45.019 "uuid": "6b0e98b3-dfe0-546a-862d-6ecee4ac51b3", 00:25:45.019 "is_configured": true, 00:25:45.019 "data_offset": 2048, 00:25:45.019 "data_size": 63488 00:25:45.019 }, 00:25:45.019 { 00:25:45.019 "name": "BaseBdev2", 00:25:45.019 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:45.019 "is_configured": true, 00:25:45.019 "data_offset": 2048, 00:25:45.019 "data_size": 63488 00:25:45.019 }, 00:25:45.019 { 00:25:45.019 "name": "BaseBdev3", 00:25:45.019 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:45.019 "is_configured": true, 00:25:45.019 "data_offset": 2048, 00:25:45.019 "data_size": 63488 00:25:45.019 
}, 00:25:45.019 { 00:25:45.019 "name": "BaseBdev4", 00:25:45.019 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:45.019 "is_configured": true, 00:25:45.019 "data_offset": 2048, 00:25:45.019 "data_size": 63488 00:25:45.019 } 00:25:45.019 ] 00:25:45.019 }' 00:25:45.019 10:51:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:45.019 10:51:11 -- common/autotest_common.sh@10 -- # set +x 00:25:45.586 10:51:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:45.586 10:51:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:45.844 [2024-07-24 10:51:12.449128] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:45.844 10:51:12 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:25:45.844 10:51:12 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:45.844 10:51:12 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.103 10:51:12 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:46.103 10:51:12 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:46.103 10:51:12 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:46.103 10:51:12 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@12 -- # local i 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.103 10:51:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:46.361 [2024-07-24 10:51:12.981172] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:25:46.361 /dev/nbd0 00:25:46.361 10:51:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:46.361 10:51:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:46.361 10:51:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:46.361 10:51:13 -- common/autotest_common.sh@857 -- # local i 00:25:46.361 10:51:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:46.361 10:51:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:46.361 10:51:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:46.619 10:51:13 -- common/autotest_common.sh@861 -- # break 00:25:46.619 10:51:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:46.619 10:51:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:46.619 10:51:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:46.619 1+0 records in 00:25:46.619 1+0 records out 00:25:46.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580335 s, 7.1 MB/s 00:25:46.619 10:51:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.619 10:51:13 -- common/autotest_common.sh@874 -- # size=4096 00:25:46.619 10:51:13 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:46.619 10:51:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:46.619 10:51:13 -- common/autotest_common.sh@877 -- # return 0 00:25:46.619 10:51:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:46.619 10:51:13 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:46.619 10:51:13 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:46.619 10:51:13 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:46.619 10:51:13 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:46.619 10:51:13 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:47.185 496+0 records in 00:25:47.185 496+0 records out 00:25:47.185 97517568 bytes (98 MB, 93 MiB) copied, 0.580126 s, 168 MB/s 00:25:47.185 10:51:13 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:47.185 10:51:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:47.185 10:51:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:47.185 10:51:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:47.185 10:51:13 -- bdev/nbd_common.sh@51 -- # local i 00:25:47.185 10:51:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.185 10:51:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:47.443 [2024-07-24 10:51:13.942190] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@41 -- # break 00:25:47.443 10:51:13 -- bdev/nbd_common.sh@45 -- # return 0 00:25:47.443 10:51:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:47.702 [2024-07-24 10:51:14.165859] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.702 10:51:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.961 10:51:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:47.961 "name": "raid_bdev1", 00:25:47.961 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:47.961 
"strip_size_kb": 64, 00:25:47.961 "state": "online", 00:25:47.961 "raid_level": "raid5f", 00:25:47.961 "superblock": true, 00:25:47.961 "num_base_bdevs": 4, 00:25:47.961 "num_base_bdevs_discovered": 3, 00:25:47.961 "num_base_bdevs_operational": 3, 00:25:47.961 "base_bdevs_list": [ 00:25:47.961 { 00:25:47.961 "name": null, 00:25:47.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.961 "is_configured": false, 00:25:47.961 "data_offset": 2048, 00:25:47.961 "data_size": 63488 00:25:47.961 }, 00:25:47.961 { 00:25:47.961 "name": "BaseBdev2", 00:25:47.961 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:47.961 "is_configured": true, 00:25:47.961 "data_offset": 2048, 00:25:47.961 "data_size": 63488 00:25:47.961 }, 00:25:47.961 { 00:25:47.961 "name": "BaseBdev3", 00:25:47.961 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:47.961 "is_configured": true, 00:25:47.961 "data_offset": 2048, 00:25:47.961 "data_size": 63488 00:25:47.961 }, 00:25:47.961 { 00:25:47.961 "name": "BaseBdev4", 00:25:47.961 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:47.961 "is_configured": true, 00:25:47.961 "data_offset": 2048, 00:25:47.961 "data_size": 63488 00:25:47.961 } 00:25:47.961 ] 00:25:47.961 }' 00:25:47.961 10:51:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:47.961 10:51:14 -- common/autotest_common.sh@10 -- # set +x 00:25:48.528 10:51:15 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:48.786 [2024-07-24 10:51:15.374274] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:48.786 [2024-07-24 10:51:15.374401] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:48.786 [2024-07-24 10:51:15.380494] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:25:48.786 [2024-07-24 10:51:15.383552] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:48.786 10:51:15 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:50.161 "name": "raid_bdev1", 00:25:50.161 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:50.161 "strip_size_kb": 64, 00:25:50.161 "state": "online", 00:25:50.161 "raid_level": "raid5f", 00:25:50.161 "superblock": true, 00:25:50.161 "num_base_bdevs": 4, 00:25:50.161 "num_base_bdevs_discovered": 4, 00:25:50.161 "num_base_bdevs_operational": 4, 00:25:50.161 "process": { 00:25:50.161 "type": "rebuild", 00:25:50.161 "target": "spare", 00:25:50.161 "progress": { 00:25:50.161 "blocks": 23040, 00:25:50.161 "percent": 12 00:25:50.161 } 00:25:50.161 }, 00:25:50.161 "base_bdevs_list": [ 00:25:50.161 { 00:25:50.161 "name": "spare", 00:25:50.161 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:25:50.161 "is_configured": true, 
00:25:50.161 "data_offset": 2048, 00:25:50.161 "data_size": 63488 00:25:50.161 }, 00:25:50.161 { 00:25:50.161 "name": "BaseBdev2", 00:25:50.161 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:50.161 "is_configured": true, 00:25:50.161 "data_offset": 2048, 00:25:50.161 "data_size": 63488 00:25:50.161 }, 00:25:50.161 { 00:25:50.161 "name": "BaseBdev3", 00:25:50.161 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:50.161 "is_configured": true, 00:25:50.161 "data_offset": 2048, 00:25:50.161 "data_size": 63488 00:25:50.161 }, 00:25:50.161 { 00:25:50.161 "name": "BaseBdev4", 00:25:50.161 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:50.161 "is_configured": true, 00:25:50.161 "data_offset": 2048, 00:25:50.161 "data_size": 63488 00:25:50.161 } 00:25:50.161 ] 00:25:50.161 }' 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:50.161 10:51:16 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:50.420 [2024-07-24 10:51:17.061673] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:50.420 [2024-07-24 10:51:17.101830] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:50.420 [2024-07-24 10:51:17.101961] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.678 10:51:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:50.936 10:51:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:50.936 "name": "raid_bdev1", 00:25:50.936 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:50.936 "strip_size_kb": 64, 00:25:50.936 "state": "online", 00:25:50.936 "raid_level": "raid5f", 00:25:50.936 "superblock": true, 00:25:50.936 "num_base_bdevs": 4, 00:25:50.936 "num_base_bdevs_discovered": 3, 00:25:50.936 "num_base_bdevs_operational": 3, 00:25:50.936 "base_bdevs_list": [ 00:25:50.936 { 00:25:50.936 "name": null, 00:25:50.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.936 "is_configured": false, 00:25:50.936 "data_offset": 2048, 00:25:50.936 "data_size": 63488 00:25:50.936 }, 00:25:50.936 { 00:25:50.936 "name": "BaseBdev2", 00:25:50.936 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:50.936 "is_configured": true, 00:25:50.936 "data_offset": 
2048, 00:25:50.936 "data_size": 63488 00:25:50.936 }, 00:25:50.936 { 00:25:50.936 "name": "BaseBdev3", 00:25:50.936 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:50.936 "is_configured": true, 00:25:50.936 "data_offset": 2048, 00:25:50.936 "data_size": 63488 00:25:50.936 }, 00:25:50.936 { 00:25:50.936 "name": "BaseBdev4", 00:25:50.936 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:50.936 "is_configured": true, 00:25:50.936 "data_offset": 2048, 00:25:50.936 "data_size": 63488 00:25:50.936 } 00:25:50.936 ] 00:25:50.936 }' 00:25:50.936 10:51:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:50.936 10:51:17 -- common/autotest_common.sh@10 -- # set +x 00:25:51.503 10:51:18 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:51.503 10:51:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:51.503 10:51:18 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:51.503 10:51:18 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:51.503 10:51:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:51.503 10:51:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.503 10:51:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.762 10:51:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:51.762 "name": "raid_bdev1", 00:25:51.762 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:51.762 "strip_size_kb": 64, 00:25:51.762 "state": "online", 00:25:51.762 "raid_level": "raid5f", 00:25:51.762 "superblock": true, 00:25:51.762 "num_base_bdevs": 4, 00:25:51.762 "num_base_bdevs_discovered": 3, 00:25:51.762 "num_base_bdevs_operational": 3, 00:25:51.762 "base_bdevs_list": [ 00:25:51.762 { 00:25:51.762 "name": null, 00:25:51.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.762 "is_configured": false, 00:25:51.762 "data_offset": 2048, 00:25:51.762 "data_size": 63488 00:25:51.762 }, 00:25:51.762 { 00:25:51.762 "name": "BaseBdev2", 00:25:51.762 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:51.762 "is_configured": true, 00:25:51.762 "data_offset": 2048, 00:25:51.762 "data_size": 63488 00:25:51.762 }, 00:25:51.762 { 00:25:51.762 "name": "BaseBdev3", 00:25:51.762 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:51.762 "is_configured": true, 00:25:51.762 "data_offset": 2048, 00:25:51.762 "data_size": 63488 00:25:51.762 }, 00:25:51.762 { 00:25:51.762 "name": "BaseBdev4", 00:25:51.762 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:51.762 "is_configured": true, 00:25:51.762 "data_offset": 2048, 00:25:51.762 "data_size": 63488 00:25:51.762 } 00:25:51.762 ] 00:25:51.762 }' 00:25:51.762 10:51:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:52.020 10:51:18 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:52.020 10:51:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:52.020 10:51:18 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:52.020 10:51:18 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:52.278 [2024-07-24 10:51:18.783107] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:52.278 [2024-07-24 10:51:18.783190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:52.278 [2024-07-24 10:51:18.789262] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000027240 00:25:52.278 [2024-07-24 10:51:18.792283] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:52.278 10:51:18 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:53.213 10:51:19 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.213 10:51:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:53.213 10:51:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:53.213 10:51:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:53.213 10:51:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:53.213 10:51:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.213 10:51:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.471 10:51:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.471 "name": "raid_bdev1", 00:25:53.471 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:53.471 "strip_size_kb": 64, 00:25:53.471 "state": "online", 00:25:53.471 "raid_level": "raid5f", 00:25:53.471 "superblock": true, 00:25:53.471 "num_base_bdevs": 4, 00:25:53.471 "num_base_bdevs_discovered": 4, 00:25:53.471 "num_base_bdevs_operational": 4, 00:25:53.471 "process": { 00:25:53.472 "type": "rebuild", 00:25:53.472 "target": "spare", 00:25:53.472 "progress": { 00:25:53.472 "blocks": 23040, 00:25:53.472 "percent": 12 00:25:53.472 } 00:25:53.472 }, 00:25:53.472 "base_bdevs_list": [ 00:25:53.472 { 00:25:53.472 "name": "spare", 00:25:53.472 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:25:53.472 "is_configured": true, 00:25:53.472 "data_offset": 2048, 00:25:53.472 "data_size": 63488 00:25:53.472 }, 00:25:53.472 { 00:25:53.472 "name": "BaseBdev2", 00:25:53.472 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:53.472 "is_configured": true, 00:25:53.472 "data_offset": 2048, 00:25:53.472 "data_size": 63488 00:25:53.472 }, 00:25:53.472 { 00:25:53.472 "name": "BaseBdev3", 00:25:53.472 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:53.472 "is_configured": true, 00:25:53.472 "data_offset": 2048, 00:25:53.472 "data_size": 63488 00:25:53.472 }, 00:25:53.472 { 00:25:53.472 "name": "BaseBdev4", 00:25:53.472 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:53.472 "is_configured": true, 00:25:53.472 "data_offset": 2048, 00:25:53.472 "data_size": 63488 00:25:53.472 } 00:25:53.472 ] 00:25:53.472 }' 00:25:53.472 10:51:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:53.730 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@657 -- # local timeout=760 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
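Once the rebuild is started, the trace settles into a loop that sleeps one second, re-reads the raid bdev, and checks .process.type/.process.target until the rebuild finishes, bounded by the timeout set a few lines above. A minimal sketch of that polling shape follows; the real test also asserts the target and progress fields on every pass, so this is only the skeleton, reusing the rpc.py path, socket, and timeout value visible in the log.

    # Sketch: poll rebuild progress once per second until the process entry
    # disappears (jq maps a missing process to "none") or the timeout expires.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=760
    while (( SECONDS < timeout )); do
        ptype=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"')
        [[ $ptype == none ]] && break
        sleep 1
    done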
00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.730 10:51:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.989 10:51:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:53.989 "name": "raid_bdev1", 00:25:53.989 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:53.989 "strip_size_kb": 64, 00:25:53.989 "state": "online", 00:25:53.989 "raid_level": "raid5f", 00:25:53.989 "superblock": true, 00:25:53.989 "num_base_bdevs": 4, 00:25:53.989 "num_base_bdevs_discovered": 4, 00:25:53.989 "num_base_bdevs_operational": 4, 00:25:53.989 "process": { 00:25:53.989 "type": "rebuild", 00:25:53.989 "target": "spare", 00:25:53.989 "progress": { 00:25:53.989 "blocks": 30720, 00:25:53.989 "percent": 16 00:25:53.989 } 00:25:53.989 }, 00:25:53.989 "base_bdevs_list": [ 00:25:53.989 { 00:25:53.989 "name": "spare", 00:25:53.989 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:25:53.989 "is_configured": true, 00:25:53.989 "data_offset": 2048, 00:25:53.989 "data_size": 63488 00:25:53.989 }, 00:25:53.989 { 00:25:53.989 "name": "BaseBdev2", 00:25:53.989 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:53.989 "is_configured": true, 00:25:53.989 "data_offset": 2048, 00:25:53.989 "data_size": 63488 00:25:53.989 }, 00:25:53.989 { 00:25:53.989 "name": "BaseBdev3", 00:25:53.989 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:53.989 "is_configured": true, 00:25:53.989 "data_offset": 2048, 00:25:53.989 "data_size": 63488 00:25:53.989 }, 00:25:53.989 { 00:25:53.989 "name": "BaseBdev4", 00:25:53.989 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:53.989 "is_configured": true, 00:25:53.989 "data_offset": 2048, 00:25:53.989 "data_size": 63488 00:25:53.989 } 00:25:53.989 ] 00:25:53.989 }' 00:25:53.989 10:51:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:53.989 10:51:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:53.989 10:51:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:53.989 10:51:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:53.989 10:51:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:55.366 "name": "raid_bdev1", 00:25:55.366 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:55.366 "strip_size_kb": 64, 00:25:55.366 "state": "online", 00:25:55.366 "raid_level": "raid5f", 00:25:55.366 "superblock": true, 00:25:55.366 "num_base_bdevs": 4, 00:25:55.366 
"num_base_bdevs_discovered": 4, 00:25:55.366 "num_base_bdevs_operational": 4, 00:25:55.366 "process": { 00:25:55.366 "type": "rebuild", 00:25:55.366 "target": "spare", 00:25:55.366 "progress": { 00:25:55.366 "blocks": 57600, 00:25:55.366 "percent": 30 00:25:55.366 } 00:25:55.366 }, 00:25:55.366 "base_bdevs_list": [ 00:25:55.366 { 00:25:55.366 "name": "spare", 00:25:55.366 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:25:55.366 "is_configured": true, 00:25:55.366 "data_offset": 2048, 00:25:55.366 "data_size": 63488 00:25:55.366 }, 00:25:55.366 { 00:25:55.366 "name": "BaseBdev2", 00:25:55.366 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:55.366 "is_configured": true, 00:25:55.366 "data_offset": 2048, 00:25:55.366 "data_size": 63488 00:25:55.366 }, 00:25:55.366 { 00:25:55.366 "name": "BaseBdev3", 00:25:55.366 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:55.366 "is_configured": true, 00:25:55.366 "data_offset": 2048, 00:25:55.366 "data_size": 63488 00:25:55.366 }, 00:25:55.366 { 00:25:55.366 "name": "BaseBdev4", 00:25:55.366 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:55.366 "is_configured": true, 00:25:55.366 "data_offset": 2048, 00:25:55.366 "data_size": 63488 00:25:55.366 } 00:25:55.366 ] 00:25:55.366 }' 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:55.366 10:51:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:55.366 10:51:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:55.366 10:51:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:56.744 "name": "raid_bdev1", 00:25:56.744 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:56.744 "strip_size_kb": 64, 00:25:56.744 "state": "online", 00:25:56.744 "raid_level": "raid5f", 00:25:56.744 "superblock": true, 00:25:56.744 "num_base_bdevs": 4, 00:25:56.744 "num_base_bdevs_discovered": 4, 00:25:56.744 "num_base_bdevs_operational": 4, 00:25:56.744 "process": { 00:25:56.744 "type": "rebuild", 00:25:56.744 "target": "spare", 00:25:56.744 "progress": { 00:25:56.744 "blocks": 84480, 00:25:56.744 "percent": 44 00:25:56.744 } 00:25:56.744 }, 00:25:56.744 "base_bdevs_list": [ 00:25:56.744 { 00:25:56.744 "name": "spare", 00:25:56.744 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:25:56.744 "is_configured": true, 00:25:56.744 "data_offset": 2048, 00:25:56.744 "data_size": 63488 00:25:56.744 }, 00:25:56.744 { 00:25:56.744 "name": "BaseBdev2", 00:25:56.744 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:56.744 "is_configured": true, 00:25:56.744 "data_offset": 2048, 00:25:56.744 "data_size": 63488 00:25:56.744 }, 00:25:56.744 { 00:25:56.744 "name": "BaseBdev3", 00:25:56.744 
"uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:56.744 "is_configured": true, 00:25:56.744 "data_offset": 2048, 00:25:56.744 "data_size": 63488 00:25:56.744 }, 00:25:56.744 { 00:25:56.744 "name": "BaseBdev4", 00:25:56.744 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:56.744 "is_configured": true, 00:25:56.744 "data_offset": 2048, 00:25:56.744 "data_size": 63488 00:25:56.744 } 00:25:56.744 ] 00:25:56.744 }' 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:56.744 10:51:23 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:58.121 "name": "raid_bdev1", 00:25:58.121 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:58.121 "strip_size_kb": 64, 00:25:58.121 "state": "online", 00:25:58.121 "raid_level": "raid5f", 00:25:58.121 "superblock": true, 00:25:58.121 "num_base_bdevs": 4, 00:25:58.121 "num_base_bdevs_discovered": 4, 00:25:58.121 "num_base_bdevs_operational": 4, 00:25:58.121 "process": { 00:25:58.121 "type": "rebuild", 00:25:58.121 "target": "spare", 00:25:58.121 "progress": { 00:25:58.121 "blocks": 111360, 00:25:58.121 "percent": 58 00:25:58.121 } 00:25:58.121 }, 00:25:58.121 "base_bdevs_list": [ 00:25:58.121 { 00:25:58.121 "name": "spare", 00:25:58.121 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:25:58.121 "is_configured": true, 00:25:58.121 "data_offset": 2048, 00:25:58.121 "data_size": 63488 00:25:58.121 }, 00:25:58.121 { 00:25:58.121 "name": "BaseBdev2", 00:25:58.121 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:58.121 "is_configured": true, 00:25:58.121 "data_offset": 2048, 00:25:58.121 "data_size": 63488 00:25:58.121 }, 00:25:58.121 { 00:25:58.121 "name": "BaseBdev3", 00:25:58.121 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:58.121 "is_configured": true, 00:25:58.121 "data_offset": 2048, 00:25:58.121 "data_size": 63488 00:25:58.121 }, 00:25:58.121 { 00:25:58.121 "name": "BaseBdev4", 00:25:58.121 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:58.121 "is_configured": true, 00:25:58.121 "data_offset": 2048, 00:25:58.121 "data_size": 63488 00:25:58.121 } 00:25:58.121 ] 00:25:58.121 }' 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.121 10:51:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:59.498 
10:51:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:59.498 10:51:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:59.498 10:51:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:59.498 10:51:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:59.498 10:51:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:59.498 10:51:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:59.498 10:51:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.498 10:51:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.498 10:51:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:59.498 "name": "raid_bdev1", 00:25:59.498 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:25:59.498 "strip_size_kb": 64, 00:25:59.498 "state": "online", 00:25:59.498 "raid_level": "raid5f", 00:25:59.498 "superblock": true, 00:25:59.498 "num_base_bdevs": 4, 00:25:59.498 "num_base_bdevs_discovered": 4, 00:25:59.498 "num_base_bdevs_operational": 4, 00:25:59.498 "process": { 00:25:59.498 "type": "rebuild", 00:25:59.498 "target": "spare", 00:25:59.498 "progress": { 00:25:59.498 "blocks": 136320, 00:25:59.498 "percent": 71 00:25:59.498 } 00:25:59.498 }, 00:25:59.498 "base_bdevs_list": [ 00:25:59.498 { 00:25:59.498 "name": "spare", 00:25:59.498 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:25:59.498 "is_configured": true, 00:25:59.498 "data_offset": 2048, 00:25:59.498 "data_size": 63488 00:25:59.498 }, 00:25:59.498 { 00:25:59.498 "name": "BaseBdev2", 00:25:59.498 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:25:59.498 "is_configured": true, 00:25:59.498 "data_offset": 2048, 00:25:59.498 "data_size": 63488 00:25:59.498 }, 00:25:59.498 { 00:25:59.498 "name": "BaseBdev3", 00:25:59.498 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:25:59.498 "is_configured": true, 00:25:59.498 "data_offset": 2048, 00:25:59.498 "data_size": 63488 00:25:59.499 }, 00:25:59.499 { 00:25:59.499 "name": "BaseBdev4", 00:25:59.499 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:25:59.499 "is_configured": true, 00:25:59.499 "data_offset": 2048, 00:25:59.499 "data_size": 63488 00:25:59.499 } 00:25:59.499 ] 00:25:59.499 }' 00:25:59.499 10:51:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:59.499 10:51:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:59.499 10:51:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:59.499 10:51:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:59.499 10:51:26 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:00.875 "name": "raid_bdev1", 
00:26:00.875 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:26:00.875 "strip_size_kb": 64, 00:26:00.875 "state": "online", 00:26:00.875 "raid_level": "raid5f", 00:26:00.875 "superblock": true, 00:26:00.875 "num_base_bdevs": 4, 00:26:00.875 "num_base_bdevs_discovered": 4, 00:26:00.875 "num_base_bdevs_operational": 4, 00:26:00.875 "process": { 00:26:00.875 "type": "rebuild", 00:26:00.875 "target": "spare", 00:26:00.875 "progress": { 00:26:00.875 "blocks": 163200, 00:26:00.875 "percent": 85 00:26:00.875 } 00:26:00.875 }, 00:26:00.875 "base_bdevs_list": [ 00:26:00.875 { 00:26:00.875 "name": "spare", 00:26:00.875 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:26:00.875 "is_configured": true, 00:26:00.875 "data_offset": 2048, 00:26:00.875 "data_size": 63488 00:26:00.875 }, 00:26:00.875 { 00:26:00.875 "name": "BaseBdev2", 00:26:00.875 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:26:00.875 "is_configured": true, 00:26:00.875 "data_offset": 2048, 00:26:00.875 "data_size": 63488 00:26:00.875 }, 00:26:00.875 { 00:26:00.875 "name": "BaseBdev3", 00:26:00.875 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:26:00.875 "is_configured": true, 00:26:00.875 "data_offset": 2048, 00:26:00.875 "data_size": 63488 00:26:00.875 }, 00:26:00.875 { 00:26:00.875 "name": "BaseBdev4", 00:26:00.875 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:26:00.875 "is_configured": true, 00:26:00.875 "data_offset": 2048, 00:26:00.875 "data_size": 63488 00:26:00.875 } 00:26:00.875 ] 00:26:00.875 }' 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:00.875 10:51:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:00.876 10:51:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:00.876 10:51:27 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:02.261 10:51:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:02.261 "name": "raid_bdev1", 00:26:02.261 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:26:02.261 "strip_size_kb": 64, 00:26:02.261 "state": "online", 00:26:02.261 "raid_level": "raid5f", 00:26:02.261 "superblock": true, 00:26:02.261 "num_base_bdevs": 4, 00:26:02.261 "num_base_bdevs_discovered": 4, 00:26:02.262 "num_base_bdevs_operational": 4, 00:26:02.262 "process": { 00:26:02.262 "type": "rebuild", 00:26:02.262 "target": "spare", 00:26:02.262 "progress": { 00:26:02.262 "blocks": 190080, 00:26:02.262 "percent": 99 00:26:02.262 } 00:26:02.262 }, 00:26:02.262 "base_bdevs_list": [ 00:26:02.262 { 00:26:02.262 "name": "spare", 00:26:02.262 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:26:02.262 "is_configured": true, 00:26:02.262 "data_offset": 2048, 00:26:02.262 "data_size": 63488 00:26:02.262 }, 00:26:02.262 { 00:26:02.262 "name": 
"BaseBdev2", 00:26:02.262 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:26:02.262 "is_configured": true, 00:26:02.262 "data_offset": 2048, 00:26:02.262 "data_size": 63488 00:26:02.262 }, 00:26:02.262 { 00:26:02.262 "name": "BaseBdev3", 00:26:02.262 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:26:02.262 "is_configured": true, 00:26:02.262 "data_offset": 2048, 00:26:02.262 "data_size": 63488 00:26:02.262 }, 00:26:02.262 { 00:26:02.262 "name": "BaseBdev4", 00:26:02.262 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:26:02.262 "is_configured": true, 00:26:02.262 "data_offset": 2048, 00:26:02.262 "data_size": 63488 00:26:02.262 } 00:26:02.262 ] 00:26:02.262 }' 00:26:02.262 10:51:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:02.262 10:51:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:02.262 10:51:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:02.262 [2024-07-24 10:51:28.889890] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:02.262 [2024-07-24 10:51:28.890014] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:02.262 [2024-07-24 10:51:28.890223] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.262 10:51:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:02.262 10:51:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.636 10:51:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.636 10:51:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:03.636 "name": "raid_bdev1", 00:26:03.636 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:26:03.636 "strip_size_kb": 64, 00:26:03.636 "state": "online", 00:26:03.636 "raid_level": "raid5f", 00:26:03.636 "superblock": true, 00:26:03.636 "num_base_bdevs": 4, 00:26:03.636 "num_base_bdevs_discovered": 4, 00:26:03.636 "num_base_bdevs_operational": 4, 00:26:03.636 "base_bdevs_list": [ 00:26:03.636 { 00:26:03.636 "name": "spare", 00:26:03.636 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:26:03.636 "is_configured": true, 00:26:03.636 "data_offset": 2048, 00:26:03.636 "data_size": 63488 00:26:03.637 }, 00:26:03.637 { 00:26:03.637 "name": "BaseBdev2", 00:26:03.637 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:26:03.637 "is_configured": true, 00:26:03.637 "data_offset": 2048, 00:26:03.637 "data_size": 63488 00:26:03.637 }, 00:26:03.637 { 00:26:03.637 "name": "BaseBdev3", 00:26:03.637 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:26:03.637 "is_configured": true, 00:26:03.637 "data_offset": 2048, 00:26:03.637 "data_size": 63488 00:26:03.637 }, 00:26:03.637 { 00:26:03.637 "name": "BaseBdev4", 00:26:03.637 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:26:03.637 "is_configured": true, 00:26:03.637 "data_offset": 2048, 00:26:03.637 "data_size": 63488 00:26:03.637 } 
00:26:03.637 ] 00:26:03.637 }' 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@660 -- # break 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:03.637 10:51:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.895 10:51:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:03.895 "name": "raid_bdev1", 00:26:03.895 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:26:03.895 "strip_size_kb": 64, 00:26:03.895 "state": "online", 00:26:03.895 "raid_level": "raid5f", 00:26:03.895 "superblock": true, 00:26:03.895 "num_base_bdevs": 4, 00:26:03.895 "num_base_bdevs_discovered": 4, 00:26:03.895 "num_base_bdevs_operational": 4, 00:26:03.895 "base_bdevs_list": [ 00:26:03.895 { 00:26:03.895 "name": "spare", 00:26:03.895 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:26:03.895 "is_configured": true, 00:26:03.895 "data_offset": 2048, 00:26:03.895 "data_size": 63488 00:26:03.895 }, 00:26:03.895 { 00:26:03.895 "name": "BaseBdev2", 00:26:03.895 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:26:03.895 "is_configured": true, 00:26:03.895 "data_offset": 2048, 00:26:03.895 "data_size": 63488 00:26:03.895 }, 00:26:03.895 { 00:26:03.895 "name": "BaseBdev3", 00:26:03.895 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:26:03.895 "is_configured": true, 00:26:03.895 "data_offset": 2048, 00:26:03.895 "data_size": 63488 00:26:03.895 }, 00:26:03.895 { 00:26:03.895 "name": "BaseBdev4", 00:26:03.895 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:26:03.895 "is_configured": true, 00:26:03.895 "data_offset": 2048, 00:26:03.895 "data_size": 63488 00:26:03.895 } 00:26:03.895 ] 00:26:03.895 }' 00:26:03.895 10:51:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:04.153 10:51:30 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.153 10:51:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.411 10:51:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:04.411 "name": "raid_bdev1", 00:26:04.411 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:26:04.411 "strip_size_kb": 64, 00:26:04.411 "state": "online", 00:26:04.411 "raid_level": "raid5f", 00:26:04.411 "superblock": true, 00:26:04.411 "num_base_bdevs": 4, 00:26:04.411 "num_base_bdevs_discovered": 4, 00:26:04.411 "num_base_bdevs_operational": 4, 00:26:04.411 "base_bdevs_list": [ 00:26:04.411 { 00:26:04.411 "name": "spare", 00:26:04.411 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:26:04.411 "is_configured": true, 00:26:04.411 "data_offset": 2048, 00:26:04.411 "data_size": 63488 00:26:04.411 }, 00:26:04.411 { 00:26:04.411 "name": "BaseBdev2", 00:26:04.411 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:26:04.411 "is_configured": true, 00:26:04.411 "data_offset": 2048, 00:26:04.411 "data_size": 63488 00:26:04.411 }, 00:26:04.411 { 00:26:04.411 "name": "BaseBdev3", 00:26:04.411 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:26:04.411 "is_configured": true, 00:26:04.411 "data_offset": 2048, 00:26:04.411 "data_size": 63488 00:26:04.411 }, 00:26:04.411 { 00:26:04.411 "name": "BaseBdev4", 00:26:04.411 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:26:04.411 "is_configured": true, 00:26:04.411 "data_offset": 2048, 00:26:04.411 "data_size": 63488 00:26:04.411 } 00:26:04.411 ] 00:26:04.411 }' 00:26:04.411 10:51:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:04.411 10:51:30 -- common/autotest_common.sh@10 -- # set +x 00:26:04.979 10:51:31 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:05.237 [2024-07-24 10:51:31.850721] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:05.237 [2024-07-24 10:51:31.850791] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:05.237 [2024-07-24 10:51:31.850976] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.237 [2024-07-24 10:51:31.851119] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.237 [2024-07-24 10:51:31.851142] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:26:05.237 10:51:31 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.237 10:51:31 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:05.495 10:51:32 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:05.495 10:51:32 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:05.495 10:51:32 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:05.495 10:51:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:05.495 10:51:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:05.495 10:51:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:05.495 10:51:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:05.495 10:51:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:05.495 10:51:32 -- 
bdev/nbd_common.sh@12 -- # local i 00:26:05.495 10:51:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:05.753 10:51:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:05.753 10:51:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:06.011 /dev/nbd0 00:26:06.011 10:51:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:06.011 10:51:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:06.011 10:51:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:06.011 10:51:32 -- common/autotest_common.sh@857 -- # local i 00:26:06.011 10:51:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:06.011 10:51:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:06.011 10:51:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:06.011 10:51:32 -- common/autotest_common.sh@861 -- # break 00:26:06.011 10:51:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:06.011 10:51:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:06.011 10:51:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:06.011 1+0 records in 00:26:06.011 1+0 records out 00:26:06.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598933 s, 6.8 MB/s 00:26:06.011 10:51:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:06.011 10:51:32 -- common/autotest_common.sh@874 -- # size=4096 00:26:06.011 10:51:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:06.011 10:51:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:06.011 10:51:32 -- common/autotest_common.sh@877 -- # return 0 00:26:06.011 10:51:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:06.011 10:51:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:06.011 10:51:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:06.270 /dev/nbd1 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:06.270 10:51:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:06.270 10:51:32 -- common/autotest_common.sh@857 -- # local i 00:26:06.270 10:51:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:06.270 10:51:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:06.270 10:51:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:06.270 10:51:32 -- common/autotest_common.sh@861 -- # break 00:26:06.270 10:51:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:06.270 10:51:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:06.270 10:51:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:06.270 1+0 records in 00:26:06.270 1+0 records out 00:26:06.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604668 s, 6.8 MB/s 00:26:06.270 10:51:32 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:06.270 10:51:32 -- common/autotest_common.sh@874 -- # size=4096 00:26:06.270 10:51:32 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:06.270 10:51:32 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:06.270 10:51:32 -- 
common/autotest_common.sh@877 -- # return 0 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:06.270 10:51:32 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:06.270 10:51:32 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@51 -- # local i 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.270 10:51:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@41 -- # break 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:06.529 10:51:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@41 -- # break 00:26:06.790 10:51:33 -- bdev/nbd_common.sh@45 -- # return 0 00:26:06.790 10:51:33 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:06.790 10:51:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:06.790 10:51:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:06.790 10:51:33 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:07.358 10:51:33 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:07.358 [2024-07-24 10:51:33.951549] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:07.358 [2024-07-24 10:51:33.951692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.358 [2024-07-24 10:51:33.951798] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:07.358 [2024-07-24 10:51:33.951845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.358 [2024-07-24 10:51:33.954953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.358 [2024-07-24 10:51:33.955026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:07.358 [2024-07-24 
10:51:33.955160] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:07.358 [2024-07-24 10:51:33.955217] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:07.358 BaseBdev1 00:26:07.358 10:51:33 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:07.358 10:51:33 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:07.358 10:51:33 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:07.616 10:51:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:07.876 [2024-07-24 10:51:34.471729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:07.876 [2024-07-24 10:51:34.471865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:07.876 [2024-07-24 10:51:34.471917] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:07.876 [2024-07-24 10:51:34.471946] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:07.876 [2024-07-24 10:51:34.472540] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:07.876 [2024-07-24 10:51:34.472594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:07.876 [2024-07-24 10:51:34.472686] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:07.876 [2024-07-24 10:51:34.472702] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:07.876 [2024-07-24 10:51:34.472709] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:07.876 [2024-07-24 10:51:34.472750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state configuring 00:26:07.876 [2024-07-24 10:51:34.472840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:07.876 BaseBdev2 00:26:07.876 10:51:34 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:07.876 10:51:34 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:07.876 10:51:34 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:26:08.134 10:51:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:08.391 [2024-07-24 10:51:34.983877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:08.391 [2024-07-24 10:51:34.984054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.391 [2024-07-24 10:51:34.984118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:08.391 [2024-07-24 10:51:34.984170] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.391 [2024-07-24 10:51:34.984769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.391 [2024-07-24 10:51:34.984829] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:08.391 [2024-07-24 10:51:34.984928] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: 
raid superblock found on bdev BaseBdev3 00:26:08.391 [2024-07-24 10:51:34.984956] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:08.391 BaseBdev3 00:26:08.391 10:51:35 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:08.391 10:51:35 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:08.391 10:51:35 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:08.649 10:51:35 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:08.907 [2024-07-24 10:51:35.484079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:08.907 [2024-07-24 10:51:35.484243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:08.907 [2024-07-24 10:51:35.484295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:08.907 [2024-07-24 10:51:35.484331] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:08.907 [2024-07-24 10:51:35.484870] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:08.907 [2024-07-24 10:51:35.484927] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:08.907 [2024-07-24 10:51:35.485031] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:08.907 [2024-07-24 10:51:35.485060] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:08.907 BaseBdev4 00:26:08.907 10:51:35 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:09.165 10:51:35 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:09.423 [2024-07-24 10:51:35.992298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:09.423 [2024-07-24 10:51:35.992449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:09.423 [2024-07-24 10:51:35.992495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:09.423 [2024-07-24 10:51:35.992548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:09.423 [2024-07-24 10:51:35.993151] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:09.423 [2024-07-24 10:51:35.993213] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:09.423 [2024-07-24 10:51:35.993339] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:09.423 [2024-07-24 10:51:35.993389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:09.423 spare 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:09.423 10:51:36 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.423 10:51:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.423 [2024-07-24 10:51:36.093542] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:26:09.423 [2024-07-24 10:51:36.093610] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:09.423 [2024-07-24 10:51:36.093909] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045ea0 00:26:09.423 [2024-07-24 10:51:36.095076] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:26:09.423 [2024-07-24 10:51:36.095103] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:26:09.423 [2024-07-24 10:51:36.095312] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.681 10:51:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:09.681 "name": "raid_bdev1", 00:26:09.681 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:26:09.681 "strip_size_kb": 64, 00:26:09.681 "state": "online", 00:26:09.681 "raid_level": "raid5f", 00:26:09.681 "superblock": true, 00:26:09.681 "num_base_bdevs": 4, 00:26:09.681 "num_base_bdevs_discovered": 4, 00:26:09.681 "num_base_bdevs_operational": 4, 00:26:09.681 "base_bdevs_list": [ 00:26:09.681 { 00:26:09.681 "name": "spare", 00:26:09.681 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:26:09.681 "is_configured": true, 00:26:09.681 "data_offset": 2048, 00:26:09.681 "data_size": 63488 00:26:09.682 }, 00:26:09.682 { 00:26:09.682 "name": "BaseBdev2", 00:26:09.682 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:26:09.682 "is_configured": true, 00:26:09.682 "data_offset": 2048, 00:26:09.682 "data_size": 63488 00:26:09.682 }, 00:26:09.682 { 00:26:09.682 "name": "BaseBdev3", 00:26:09.682 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:26:09.682 "is_configured": true, 00:26:09.682 "data_offset": 2048, 00:26:09.682 "data_size": 63488 00:26:09.682 }, 00:26:09.682 { 00:26:09.682 "name": "BaseBdev4", 00:26:09.682 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:26:09.682 "is_configured": true, 00:26:09.682 "data_offset": 2048, 00:26:09.682 "data_size": 63488 00:26:09.682 } 00:26:09.682 ] 00:26:09.682 }' 00:26:09.682 10:51:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:09.682 10:51:36 -- common/autotest_common.sh@10 -- # set +x 00:26:10.615 10:51:36 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:10.615 10:51:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:10.615 10:51:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:10.615 10:51:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:10.615 10:51:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:10.615 10:51:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.615 10:51:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:10.873 10:51:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:10.873 "name": 
"raid_bdev1", 00:26:10.873 "uuid": "3e7c09df-8965-492a-92a0-0dad0d2f1ed6", 00:26:10.873 "strip_size_kb": 64, 00:26:10.873 "state": "online", 00:26:10.873 "raid_level": "raid5f", 00:26:10.873 "superblock": true, 00:26:10.873 "num_base_bdevs": 4, 00:26:10.873 "num_base_bdevs_discovered": 4, 00:26:10.873 "num_base_bdevs_operational": 4, 00:26:10.873 "base_bdevs_list": [ 00:26:10.873 { 00:26:10.873 "name": "spare", 00:26:10.873 "uuid": "c4990f16-70f3-5df9-8dd6-d5c76fb19c1d", 00:26:10.873 "is_configured": true, 00:26:10.873 "data_offset": 2048, 00:26:10.873 "data_size": 63488 00:26:10.873 }, 00:26:10.873 { 00:26:10.873 "name": "BaseBdev2", 00:26:10.873 "uuid": "58668f35-05ea-5432-8aaa-dfac5e368d94", 00:26:10.873 "is_configured": true, 00:26:10.873 "data_offset": 2048, 00:26:10.873 "data_size": 63488 00:26:10.873 }, 00:26:10.873 { 00:26:10.873 "name": "BaseBdev3", 00:26:10.873 "uuid": "a2d966f5-fb5d-576d-b6fe-14ce87a9f97a", 00:26:10.873 "is_configured": true, 00:26:10.873 "data_offset": 2048, 00:26:10.873 "data_size": 63488 00:26:10.873 }, 00:26:10.873 { 00:26:10.873 "name": "BaseBdev4", 00:26:10.873 "uuid": "d6e0f910-b940-5004-ae6c-9deecd7ad5e4", 00:26:10.873 "is_configured": true, 00:26:10.873 "data_offset": 2048, 00:26:10.873 "data_size": 63488 00:26:10.873 } 00:26:10.873 ] 00:26:10.873 }' 00:26:10.873 10:51:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:10.873 10:51:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:10.873 10:51:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:10.873 10:51:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:10.873 10:51:37 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.873 10:51:37 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:11.131 10:51:37 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:11.131 10:51:37 -- bdev/bdev_raid.sh@709 -- # killprocess 142790 00:26:11.131 10:51:37 -- common/autotest_common.sh@926 -- # '[' -z 142790 ']' 00:26:11.131 10:51:37 -- common/autotest_common.sh@930 -- # kill -0 142790 00:26:11.131 10:51:37 -- common/autotest_common.sh@931 -- # uname 00:26:11.131 10:51:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:11.131 10:51:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142790 00:26:11.131 killing process with pid 142790 00:26:11.131 Received shutdown signal, test time was about 60.000000 seconds 00:26:11.131 00:26:11.131 Latency(us) 00:26:11.131 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.131 =================================================================================================================== 00:26:11.131 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:11.131 10:51:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:11.131 10:51:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:11.131 10:51:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142790' 00:26:11.131 10:51:37 -- common/autotest_common.sh@945 -- # kill 142790 00:26:11.131 10:51:37 -- common/autotest_common.sh@950 -- # wait 142790 00:26:11.131 [2024-07-24 10:51:37.696754] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:11.131 [2024-07-24 10:51:37.696870] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:11.131 [2024-07-24 10:51:37.696984] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:11.131 [2024-07-24 10:51:37.697005] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:26:11.131 [2024-07-24 10:51:37.752290] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:11.390 ************************************ 00:26:11.390 END TEST raid5f_rebuild_test_sb 00:26:11.390 ************************************ 00:26:11.390 10:51:38 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:11.390 00:26:11.390 real 0m30.866s 00:26:11.390 user 0m48.716s 00:26:11.390 sys 0m3.541s 00:26:11.390 10:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:11.390 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 10:51:38 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:11.390 00:26:11.390 real 12m27.228s 00:26:11.390 user 21m13.774s 00:26:11.390 sys 1m43.964s 00:26:11.390 10:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:11.390 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:26:11.390 ************************************ 00:26:11.390 END TEST bdev_raid 00:26:11.390 ************************************ 00:26:11.649 10:51:38 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:11.649 10:51:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:11.649 10:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:11.649 10:51:38 -- common/autotest_common.sh@10 -- # set +x 00:26:11.649 ************************************ 00:26:11.649 START TEST bdevperf_config 00:26:11.649 ************************************ 00:26:11.649 10:51:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:11.649 * Looking for test storage... 
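Note on the pattern traced above: raid5f_rebuild_test_sb drives its wait loop entirely through the bdev_raid_get_bdevs RPC, filtering the JSON with jq and sleeping until the raid bdev no longer reports an in-progress rebuild. The sketch below is an illustration only, not the test script itself; it reuses the rpc.py path, socket path and bdev name from this run, and substitutes an arbitrary 60-second cap for the test's own timeout variable, which is set outside the portion of the log shown here.

  #!/usr/bin/env bash
  # Poll an SPDK raid bdev until its background rebuild completes (illustrative sketch).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  timeout=60
  SECONDS=0
  while (( SECONDS < timeout )); do
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      ptype=$(jq -r '.process.type // "none"' <<< "$info")
      target=$(jq -r '.process.target // "none"' <<< "$info")
      blocks=$(jq -r '.process.progress.blocks // 0' <<< "$info")
      echo "process=$ptype target=$target blocks=$blocks"
      # Once the rebuild finishes, the "process" object disappears from the RPC output,
      # so both fields fall back to "none" and the loop can stop.
      [[ $ptype == none && $target == none ]] && break
      sleep 1
  done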
00:26:11.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:11.649 10:51:38 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:11.649 10:51:38 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:11.649 10:51:38 -- bdevperf/common.sh@9 -- # local rw=read 00:26:11.649 10:51:38 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:11.649 10:51:38 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:11.649 10:51:38 -- bdevperf/common.sh@13 -- # cat 00:26:11.649 00:26:11.649 10:51:38 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:11.649 10:51:38 -- bdevperf/common.sh@19 -- # echo 00:26:11.649 10:51:38 -- bdevperf/common.sh@20 -- # cat 00:26:11.649 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:11.649 10:51:38 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:11.649 10:51:38 -- bdevperf/common.sh@9 -- # local rw= 00:26:11.649 10:51:38 -- bdevperf/common.sh@10 -- # local filename= 00:26:11.649 10:51:38 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:11.649 10:51:38 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:11.649 10:51:38 -- bdevperf/common.sh@19 -- # echo 00:26:11.649 10:51:38 -- bdevperf/common.sh@20 -- # cat 00:26:11.649 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:11.649 10:51:38 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:11.649 10:51:38 -- bdevperf/common.sh@9 -- # local rw= 00:26:11.649 10:51:38 -- bdevperf/common.sh@10 -- # local filename= 00:26:11.649 10:51:38 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:11.649 10:51:38 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:11.649 10:51:38 -- bdevperf/common.sh@19 -- # echo 00:26:11.649 10:51:38 -- bdevperf/common.sh@20 -- # cat 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:11.649 10:51:38 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:11.649 10:51:38 -- bdevperf/common.sh@9 -- # local rw= 00:26:11.649 10:51:38 -- bdevperf/common.sh@10 -- # local filename= 00:26:11.649 10:51:38 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:11.649 10:51:38 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:11.649 10:51:38 -- bdevperf/common.sh@19 -- # echo 00:26:11.649 00:26:11.649 10:51:38 -- bdevperf/common.sh@20 -- # cat 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:11.649 10:51:38 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:11.649 10:51:38 -- bdevperf/common.sh@9 -- # local rw= 00:26:11.649 10:51:38 -- bdevperf/common.sh@10 -- # local filename= 00:26:11.649 10:51:38 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:11.649 10:51:38 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:11.649 10:51:38 -- bdevperf/common.sh@19 -- # echo 00:26:11.649 00:26:11.649 10:51:38 -- bdevperf/common.sh@20 -- # cat 00:26:11.649 10:51:38 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:14.945 10:51:40 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-24 10:51:38.256605] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:14.945 [2024-07-24 10:51:38.256915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143571 ] 00:26:14.945 Using job config with 4 jobs 00:26:14.945 [2024-07-24 10:51:38.405142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.945 [2024-07-24 10:51:38.503227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.945 cpumask for '\''job0'\'' is too big 00:26:14.945 cpumask for '\''job1'\'' is too big 00:26:14.945 cpumask for '\''job2'\'' is too big 00:26:14.945 cpumask for '\''job3'\'' is too big 00:26:14.945 Running I/O for 2 seconds... 00:26:14.945 00:26:14.945 Latency(us) 00:26:14.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.01 27122.11 26.49 0.00 0.00 9429.42 2055.45 15728.64 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27137.13 26.50 0.00 0.00 9401.11 2055.45 13285.93 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27115.56 26.48 0.00 0.00 9387.27 1742.66 11260.28 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27096.65 26.46 0.00 0.00 9375.32 1668.19 10664.49 00:26:14.945 =================================================================================================================== 00:26:14.945 Total : 108471.44 105.93 0.00 0.00 9398.25 1668.19 15728.64' 00:26:14.945 10:51:40 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-24 10:51:38.256605] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:14.945 [2024-07-24 10:51:38.256915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143571 ] 00:26:14.945 Using job config with 4 jobs 00:26:14.945 [2024-07-24 10:51:38.405142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.945 [2024-07-24 10:51:38.503227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.945 cpumask for '\''job0'\'' is too big 00:26:14.945 cpumask for '\''job1'\'' is too big 00:26:14.945 cpumask for '\''job2'\'' is too big 00:26:14.945 cpumask for '\''job3'\'' is too big 00:26:14.945 Running I/O for 2 seconds... 
00:26:14.945 00:26:14.945 Latency(us) 00:26:14.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.01 27122.11 26.49 0.00 0.00 9429.42 2055.45 15728.64 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27137.13 26.50 0.00 0.00 9401.11 2055.45 13285.93 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27115.56 26.48 0.00 0.00 9387.27 1742.66 11260.28 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27096.65 26.46 0.00 0.00 9375.32 1668.19 10664.49 00:26:14.945 =================================================================================================================== 00:26:14.945 Total : 108471.44 105.93 0.00 0.00 9398.25 1668.19 15728.64' 00:26:14.945 10:51:40 -- bdevperf/common.sh@32 -- # echo '[2024-07-24 10:51:38.256605] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:14.945 [2024-07-24 10:51:38.256915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143571 ] 00:26:14.945 Using job config with 4 jobs 00:26:14.945 [2024-07-24 10:51:38.405142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.945 [2024-07-24 10:51:38.503227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.945 cpumask for '\''job0'\'' is too big 00:26:14.945 cpumask for '\''job1'\'' is too big 00:26:14.945 cpumask for '\''job2'\'' is too big 00:26:14.945 cpumask for '\''job3'\'' is too big 00:26:14.945 Running I/O for 2 seconds... 00:26:14.945 00:26:14.945 Latency(us) 00:26:14.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.01 27122.11 26.49 0.00 0.00 9429.42 2055.45 15728.64 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27137.13 26.50 0.00 0.00 9401.11 2055.45 13285.93 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27115.56 26.48 0.00 0.00 9387.27 1742.66 11260.28 00:26:14.945 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:14.945 Malloc0 : 2.02 27096.65 26.46 0.00 0.00 9375.32 1668.19 10664.49 00:26:14.945 =================================================================================================================== 00:26:14.945 Total : 108471.44 105.93 0.00 0.00 9398.25 1668.19 15728.64' 00:26:14.945 10:51:41 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:14.945 10:51:41 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:14.945 10:51:41 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:14.945 10:51:41 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:14.945 [2024-07-24 10:51:41.058425] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:14.945 [2024-07-24 10:51:41.058685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143612 ] 00:26:14.945 [2024-07-24 10:51:41.206502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.945 [2024-07-24 10:51:41.310576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.945 cpumask for 'job0' is too big 00:26:14.945 cpumask for 'job1' is too big 00:26:14.945 cpumask for 'job2' is too big 00:26:14.945 cpumask for 'job3' is too big 00:26:17.490 10:51:43 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:17.490 Running I/O for 2 seconds... 00:26:17.490 00:26:17.490 Latency(us) 00:26:17.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.490 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.490 Malloc0 : 2.01 27479.15 26.84 0.00 0.00 9306.50 2085.24 16205.27 00:26:17.490 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.490 Malloc0 : 2.02 27469.77 26.83 0.00 0.00 9287.60 1861.82 14298.76 00:26:17.490 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.490 Malloc0 : 2.02 27446.98 26.80 0.00 0.00 9275.32 1891.61 12571.00 00:26:17.490 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:17.490 Malloc0 : 2.03 27425.26 26.78 0.00 0.00 9261.87 1854.37 11081.54 00:26:17.490 =================================================================================================================== 00:26:17.490 Total : 109821.16 107.25 0.00 0.00 9282.79 1854.37 16205.27' 00:26:17.490 10:51:43 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:17.490 10:51:43 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:17.490 10:51:43 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:17.490 10:51:43 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:17.490 10:51:43 -- bdevperf/common.sh@9 -- # local rw=write 00:26:17.490 10:51:43 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:17.490 10:51:43 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:17.490 10:51:43 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:17.490 10:51:43 -- bdevperf/common.sh@19 -- # echo 00:26:17.490 00:26:17.490 10:51:43 -- bdevperf/common.sh@20 -- # cat 00:26:17.490 10:51:43 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:17.490 10:51:43 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:17.490 10:51:43 -- bdevperf/common.sh@9 -- # local rw=write 00:26:17.490 10:51:43 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:17.490 10:51:43 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:17.490 10:51:43 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:17.490 10:51:43 -- bdevperf/common.sh@19 -- # echo 00:26:17.490 00:26:17.490 10:51:43 -- bdevperf/common.sh@20 -- # cat 00:26:17.490 10:51:43 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:17.490 10:51:43 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:17.490 10:51:43 -- bdevperf/common.sh@9 -- # local rw=write 00:26:17.490 10:51:43 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:17.490 10:51:43 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:17.490 00:26:17.490 10:51:43 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:26:17.490 10:51:43 -- bdevperf/common.sh@19 -- # echo 00:26:17.490 10:51:43 -- bdevperf/common.sh@20 -- # cat 00:26:17.490 10:51:43 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:20.023 10:51:46 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-24 10:51:43.887141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:20.024 [2024-07-24 10:51:43.888139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143651 ] 00:26:20.024 Using job config with 3 jobs 00:26:20.024 [2024-07-24 10:51:44.035344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.024 [2024-07-24 10:51:44.143152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.024 cpumask for '\''job0'\'' is too big 00:26:20.024 cpumask for '\''job1'\'' is too big 00:26:20.024 cpumask for '\''job2'\'' is too big 00:26:20.024 Running I/O for 2 seconds... 00:26:20.024 00:26:20.024 Latency(us) 00:26:20.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.01 36821.63 35.96 0.00 0.00 6944.95 1638.40 10902.81 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.02 36834.93 35.97 0.00 0.00 6929.15 1586.27 9175.04 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.02 36806.21 35.94 0.00 0.00 6919.24 1854.37 8698.41 00:26:20.024 =================================================================================================================== 00:26:20.024 Total : 110462.77 107.87 0.00 0.00 6931.10 1586.27 10902.81' 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-24 10:51:43.887141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:20.024 [2024-07-24 10:51:43.888139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143651 ] 00:26:20.024 Using job config with 3 jobs 00:26:20.024 [2024-07-24 10:51:44.035344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.024 [2024-07-24 10:51:44.143152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.024 cpumask for '\''job0'\'' is too big 00:26:20.024 cpumask for '\''job1'\'' is too big 00:26:20.024 cpumask for '\''job2'\'' is too big 00:26:20.024 Running I/O for 2 seconds... 
00:26:20.024 00:26:20.024 Latency(us) 00:26:20.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.01 36821.63 35.96 0.00 0.00 6944.95 1638.40 10902.81 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.02 36834.93 35.97 0.00 0.00 6929.15 1586.27 9175.04 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.02 36806.21 35.94 0.00 0.00 6919.24 1854.37 8698.41 00:26:20.024 =================================================================================================================== 00:26:20.024 Total : 110462.77 107.87 0.00 0.00 6931.10 1586.27 10902.81' 00:26:20.024 10:51:46 -- bdevperf/common.sh@32 -- # echo '[2024-07-24 10:51:43.887141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:20.024 [2024-07-24 10:51:43.888139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143651 ] 00:26:20.024 Using job config with 3 jobs 00:26:20.024 [2024-07-24 10:51:44.035344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.024 [2024-07-24 10:51:44.143152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.024 cpumask for '\''job0'\'' is too big 00:26:20.024 cpumask for '\''job1'\'' is too big 00:26:20.024 cpumask for '\''job2'\'' is too big 00:26:20.024 Running I/O for 2 seconds... 00:26:20.024 00:26:20.024 Latency(us) 00:26:20.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.01 36821.63 35.96 0.00 0.00 6944.95 1638.40 10902.81 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.02 36834.93 35.97 0.00 0.00 6929.15 1586.27 9175.04 00:26:20.024 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:20.024 Malloc0 : 2.02 36806.21 35.94 0.00 0.00 6919.24 1854.37 8698.41 00:26:20.024 =================================================================================================================== 00:26:20.024 Total : 110462.77 107.87 0.00 0.00 6931.10 1586.27 10902.81' 00:26:20.024 10:51:46 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:20.024 10:51:46 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@35 -- # cleanup 00:26:20.024 10:51:46 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:20.024 10:51:46 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:20.024 10:51:46 -- bdevperf/common.sh@9 -- # local rw=rw 00:26:20.024 10:51:46 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:20.024 10:51:46 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:20.024 10:51:46 -- bdevperf/common.sh@13 -- # cat 00:26:20.024 10:51:46 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:20.024 00:26:20.024 10:51:46 -- bdevperf/common.sh@19 -- # echo 00:26:20.024 
10:51:46 -- bdevperf/common.sh@20 -- # cat 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:20.024 10:51:46 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:20.024 10:51:46 -- bdevperf/common.sh@9 -- # local rw= 00:26:20.024 10:51:46 -- bdevperf/common.sh@10 -- # local filename= 00:26:20.024 10:51:46 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:20.024 10:51:46 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:20.024 00:26:20.024 10:51:46 -- bdevperf/common.sh@19 -- # echo 00:26:20.024 10:51:46 -- bdevperf/common.sh@20 -- # cat 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:20.024 10:51:46 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:20.024 10:51:46 -- bdevperf/common.sh@9 -- # local rw= 00:26:20.024 10:51:46 -- bdevperf/common.sh@10 -- # local filename= 00:26:20.024 10:51:46 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:20.024 10:51:46 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:20.024 00:26:20.024 10:51:46 -- bdevperf/common.sh@19 -- # echo 00:26:20.024 10:51:46 -- bdevperf/common.sh@20 -- # cat 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:20.024 10:51:46 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:20.024 10:51:46 -- bdevperf/common.sh@9 -- # local rw= 00:26:20.024 10:51:46 -- bdevperf/common.sh@10 -- # local filename= 00:26:20.024 10:51:46 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:20.024 10:51:46 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:20.024 00:26:20.024 10:51:46 -- bdevperf/common.sh@19 -- # echo 00:26:20.024 10:51:46 -- bdevperf/common.sh@20 -- # cat 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:20.024 10:51:46 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:20.024 10:51:46 -- bdevperf/common.sh@9 -- # local rw= 00:26:20.024 10:51:46 -- bdevperf/common.sh@10 -- # local filename= 00:26:20.024 10:51:46 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:20.024 10:51:46 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:20.024 00:26:20.024 10:51:46 -- bdevperf/common.sh@19 -- # echo 00:26:20.024 10:51:46 -- bdevperf/common.sh@20 -- # cat 00:26:20.024 10:51:46 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:23.315 10:51:49 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-24 10:51:46.709515] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:23.315 [2024-07-24 10:51:46.709740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143690 ] 00:26:23.315 Using job config with 4 jobs 00:26:23.315 [2024-07-24 10:51:46.848180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.315 [2024-07-24 10:51:46.963946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.315 cpumask for '\''job0'\'' is too big 00:26:23.315 cpumask for '\''job1'\'' is too big 00:26:23.315 cpumask for '\''job2'\'' is too big 00:26:23.315 cpumask for '\''job3'\'' is too big 00:26:23.315 Running I/O for 2 seconds... 
00:26:23.315 00:26:23.315 Latency(us) 00:26:23.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.04 13446.41 13.13 0.00 0.00 19022.47 3872.58 32410.53 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.04 13435.10 13.12 0.00 0.00 19020.16 4349.21 32410.53 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.04 13424.44 13.11 0.00 0.00 18973.97 4289.63 28001.75 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.04 13414.55 13.10 0.00 0.00 18966.96 4944.99 28001.75 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.04 13404.42 13.09 0.00 0.00 18914.51 3902.37 24069.59 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.05 13393.68 13.08 0.00 0.00 18908.44 4587.52 23950.43 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.05 13381.76 13.07 0.00 0.00 18860.50 3693.85 21805.61 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.05 13369.95 13.06 0.00 0.00 18860.07 4319.42 21686.46 00:26:23.315 =================================================================================================================== 00:26:23.315 Total : 107270.31 104.76 0.00 0.00 18940.88 3693.85 32410.53' 00:26:23.315 10:51:49 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-24 10:51:46.709515] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:23.315 [2024-07-24 10:51:46.709740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143690 ] 00:26:23.315 Using job config with 4 jobs 00:26:23.315 [2024-07-24 10:51:46.848180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.315 [2024-07-24 10:51:46.963946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.315 cpumask for '\''job0'\'' is too big 00:26:23.315 cpumask for '\''job1'\'' is too big 00:26:23.315 cpumask for '\''job2'\'' is too big 00:26:23.315 cpumask for '\''job3'\'' is too big 00:26:23.315 Running I/O for 2 seconds... 
00:26:23.315 00:26:23.315 Latency(us) 00:26:23.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.04 13446.41 13.13 0.00 0.00 19022.47 3872.58 32410.53 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.04 13435.10 13.12 0.00 0.00 19020.16 4349.21 32410.53 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.04 13424.44 13.11 0.00 0.00 18973.97 4289.63 28001.75 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.04 13414.55 13.10 0.00 0.00 18966.96 4944.99 28001.75 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.04 13404.42 13.09 0.00 0.00 18914.51 3902.37 24069.59 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.05 13393.68 13.08 0.00 0.00 18908.44 4587.52 23950.43 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.05 13381.76 13.07 0.00 0.00 18860.50 3693.85 21805.61 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.05 13369.95 13.06 0.00 0.00 18860.07 4319.42 21686.46 00:26:23.315 =================================================================================================================== 00:26:23.315 Total : 107270.31 104.76 0.00 0.00 18940.88 3693.85 32410.53' 00:26:23.315 10:51:49 -- bdevperf/common.sh@32 -- # echo '[2024-07-24 10:51:46.709515] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:23.315 [2024-07-24 10:51:46.709740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143690 ] 00:26:23.315 Using job config with 4 jobs 00:26:23.315 [2024-07-24 10:51:46.848180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.315 [2024-07-24 10:51:46.963946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.315 cpumask for '\''job0'\'' is too big 00:26:23.315 cpumask for '\''job1'\'' is too big 00:26:23.315 cpumask for '\''job2'\'' is too big 00:26:23.315 cpumask for '\''job3'\'' is too big 00:26:23.315 Running I/O for 2 seconds... 
00:26:23.315 00:26:23.315 Latency(us) 00:26:23.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc0 : 2.04 13446.41 13.13 0.00 0.00 19022.47 3872.58 32410.53 00:26:23.315 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.315 Malloc1 : 2.04 13435.10 13.12 0.00 0.00 19020.16 4349.21 32410.53 00:26:23.315 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.316 Malloc0 : 2.04 13424.44 13.11 0.00 0.00 18973.97 4289.63 28001.75 00:26:23.316 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.316 Malloc1 : 2.04 13414.55 13.10 0.00 0.00 18966.96 4944.99 28001.75 00:26:23.316 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.316 Malloc0 : 2.04 13404.42 13.09 0.00 0.00 18914.51 3902.37 24069.59 00:26:23.316 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.316 Malloc1 : 2.05 13393.68 13.08 0.00 0.00 18908.44 4587.52 23950.43 00:26:23.316 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.316 Malloc0 : 2.05 13381.76 13.07 0.00 0.00 18860.50 3693.85 21805.61 00:26:23.316 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:23.316 Malloc1 : 2.05 13369.95 13.06 0.00 0.00 18860.07 4319.42 21686.46 00:26:23.316 =================================================================================================================== 00:26:23.316 Total : 107270.31 104.76 0.00 0.00 18940.88 3693.85 32410.53' 00:26:23.316 10:51:49 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:23.316 10:51:49 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:23.316 10:51:49 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:26:23.316 10:51:49 -- bdevperf/test_config.sh@44 -- # cleanup 00:26:23.316 10:51:49 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:23.316 10:51:49 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:23.316 ************************************ 00:26:23.316 END TEST bdevperf_config 00:26:23.316 ************************************ 00:26:23.316 00:26:23.316 real 0m11.407s 00:26:23.316 user 0m9.845s 00:26:23.316 sys 0m1.003s 00:26:23.316 10:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:23.316 10:51:49 -- common/autotest_common.sh@10 -- # set +x 00:26:23.316 10:51:49 -- spdk/autotest.sh@198 -- # uname -s 00:26:23.316 10:51:49 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:26:23.316 10:51:49 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:23.316 10:51:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:23.316 10:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:23.316 10:51:49 -- common/autotest_common.sh@10 -- # set +x 00:26:23.316 ************************************ 00:26:23.316 START TEST reactor_set_interrupt 00:26:23.316 ************************************ 00:26:23.316 10:51:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:23.316 * Looking for test storage... 
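(Editor's note: the 4-job run just completed was driven by a config file assembled by the create_job calls traced earlier. A hedged sketch of the rough shape of that test.conf; the [global] keys come from the traced arguments "rw Malloc0:Malloc1", the empty per-job sections inherit those defaults, and any additional keys would be assumptions:

cat > /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf <<'EOF'
[global]
rw=rw
filename=Malloc0:Malloc1

[job0]
[job1]
[job2]
[job3]
EOF
)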
00:26:23.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.316 10:51:49 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:23.316 10:51:49 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:23.316 10:51:49 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.316 10:51:49 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.316 10:51:49 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:23.316 10:51:49 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:23.316 10:51:49 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:23.316 10:51:49 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:23.316 10:51:49 -- common/autotest_common.sh@34 -- # set -e 00:26:23.316 10:51:49 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:23.316 10:51:49 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:23.316 10:51:49 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:23.316 10:51:49 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:23.316 10:51:49 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:23.316 10:51:49 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:23.316 10:51:49 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:23.316 10:51:49 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:23.316 10:51:49 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:23.316 10:51:49 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:23.316 10:51:49 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:23.316 10:51:49 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:23.316 10:51:49 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:23.316 10:51:49 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:23.316 10:51:49 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:23.316 10:51:49 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:23.316 10:51:49 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:23.316 10:51:49 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:23.316 10:51:49 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:23.316 10:51:49 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:23.316 10:51:49 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:23.316 10:51:49 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:23.316 10:51:49 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:23.316 10:51:49 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:23.316 10:51:49 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:23.316 10:51:49 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:23.316 10:51:49 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:23.316 10:51:49 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:23.316 10:51:49 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:23.316 10:51:49 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:23.316 10:51:49 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
00:26:23.316 10:51:49 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:26:23.316 10:51:49 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:23.316 10:51:49 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:23.316 10:51:49 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:23.316 10:51:49 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:23.316 10:51:49 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:23.316 10:51:49 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:23.316 10:51:49 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:23.316 10:51:49 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:26:23.316 10:51:49 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:23.316 10:51:49 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:23.316 10:51:49 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:23.316 10:51:49 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:23.316 10:51:49 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:26:23.316 10:51:49 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:23.316 10:51:49 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:26:23.316 10:51:49 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:23.316 10:51:49 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:23.316 10:51:49 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:26:23.316 10:51:49 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:26:23.316 10:51:49 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:23.316 10:51:49 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:26:23.316 10:51:49 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:26:23.316 10:51:49 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:26:23.316 10:51:49 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:26:23.316 10:51:49 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:26:23.316 10:51:49 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:26:23.316 10:51:49 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:26:23.316 10:51:49 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:26:23.317 10:51:49 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:26:23.317 10:51:49 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:26:23.317 10:51:49 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:26:23.317 10:51:49 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:26:23.317 10:51:49 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:23.317 10:51:49 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:26:23.317 10:51:49 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:26:23.317 10:51:49 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:26:23.317 10:51:49 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:26:23.317 10:51:49 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:23.317 10:51:49 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:26:23.317 10:51:49 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:26:23.317 10:51:49 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:26:23.317 10:51:49 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:26:23.317 10:51:49 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:26:23.317 10:51:49 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:26:23.317 10:51:49 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:26:23.317 10:51:49 -- 
common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:26:23.317 10:51:49 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:26:23.317 10:51:49 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:26:23.317 10:51:49 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:23.317 10:51:49 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:26:23.317 10:51:49 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:23.317 10:51:49 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:23.317 10:51:49 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:23.317 10:51:49 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:23.317 10:51:49 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:23.317 10:51:49 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:23.317 10:51:49 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:23.317 10:51:49 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:23.317 10:51:49 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:23.317 10:51:49 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:23.317 10:51:49 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:23.317 10:51:49 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:23.317 10:51:49 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:23.317 10:51:49 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:23.317 10:51:49 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:23.317 10:51:49 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:23.317 10:51:49 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:23.317 #define SPDK_CONFIG_H 00:26:23.317 #define SPDK_CONFIG_APPS 1 00:26:23.317 #define SPDK_CONFIG_ARCH native 00:26:23.317 #define SPDK_CONFIG_ASAN 1 00:26:23.317 #undef SPDK_CONFIG_AVAHI 00:26:23.317 #undef SPDK_CONFIG_CET 00:26:23.317 #define SPDK_CONFIG_COVERAGE 1 00:26:23.317 #define SPDK_CONFIG_CROSS_PREFIX 00:26:23.317 #undef SPDK_CONFIG_CRYPTO 00:26:23.317 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:23.317 #undef SPDK_CONFIG_CUSTOMOCF 00:26:23.317 #undef SPDK_CONFIG_DAOS 00:26:23.317 #define SPDK_CONFIG_DAOS_DIR 00:26:23.317 #define SPDK_CONFIG_DEBUG 1 00:26:23.317 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:23.317 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:26:23.317 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:26:23.317 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:26:23.317 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:23.317 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:23.317 #define SPDK_CONFIG_EXAMPLES 1 00:26:23.317 #undef SPDK_CONFIG_FC 00:26:23.317 #define SPDK_CONFIG_FC_PATH 00:26:23.317 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:23.317 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:23.317 #undef SPDK_CONFIG_FUSE 00:26:23.317 #undef SPDK_CONFIG_FUZZER 00:26:23.317 #define SPDK_CONFIG_FUZZER_LIB 00:26:23.317 #undef SPDK_CONFIG_GOLANG 00:26:23.317 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:23.317 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:23.317 #undef 
SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:23.317 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:23.317 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:23.317 #define SPDK_CONFIG_IDXD 1 00:26:23.317 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:23.317 #undef SPDK_CONFIG_IPSEC_MB 00:26:23.317 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:23.317 #define SPDK_CONFIG_ISAL 1 00:26:23.317 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:23.317 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:23.317 #define SPDK_CONFIG_LIBDIR 00:26:23.317 #undef SPDK_CONFIG_LTO 00:26:23.317 #define SPDK_CONFIG_MAX_LCORES 00:26:23.317 #define SPDK_CONFIG_NVME_CUSE 1 00:26:23.317 #undef SPDK_CONFIG_OCF 00:26:23.317 #define SPDK_CONFIG_OCF_PATH 00:26:23.317 #define SPDK_CONFIG_OPENSSL_PATH 00:26:23.317 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:23.317 #undef SPDK_CONFIG_PGO_USE 00:26:23.317 #define SPDK_CONFIG_PREFIX /usr/local 00:26:23.317 #define SPDK_CONFIG_RAID5F 1 00:26:23.317 #undef SPDK_CONFIG_RBD 00:26:23.317 #define SPDK_CONFIG_RDMA 1 00:26:23.317 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:23.317 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:23.317 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:23.317 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:23.317 #undef SPDK_CONFIG_SHARED 00:26:23.317 #undef SPDK_CONFIG_SMA 00:26:23.317 #define SPDK_CONFIG_TESTS 1 00:26:23.317 #undef SPDK_CONFIG_TSAN 00:26:23.317 #undef SPDK_CONFIG_UBLK 00:26:23.317 #define SPDK_CONFIG_UBSAN 1 00:26:23.317 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:23.317 #undef SPDK_CONFIG_URING 00:26:23.317 #define SPDK_CONFIG_URING_PATH 00:26:23.317 #undef SPDK_CONFIG_URING_ZNS 00:26:23.317 #undef SPDK_CONFIG_USDT 00:26:23.317 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:23.317 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:23.317 #undef SPDK_CONFIG_VFIO_USER 00:26:23.317 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:23.317 #define SPDK_CONFIG_VHOST 1 00:26:23.317 #define SPDK_CONFIG_VIRTIO 1 00:26:23.317 #undef SPDK_CONFIG_VTUNE 00:26:23.317 #define SPDK_CONFIG_VTUNE_DIR 00:26:23.317 #define SPDK_CONFIG_WERROR 1 00:26:23.317 #define SPDK_CONFIG_WPDK_DIR 00:26:23.317 #undef SPDK_CONFIG_XNVME 00:26:23.317 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:23.317 10:51:49 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:23.317 10:51:49 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.317 10:51:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.317 10:51:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.317 10:51:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.317 10:51:49 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:23.318 10:51:49 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:23.318 10:51:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:23.318 10:51:49 -- paths/export.sh@5 -- # export PATH 00:26:23.318 10:51:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:23.318 10:51:49 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:23.318 10:51:49 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:23.318 10:51:49 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:23.318 10:51:49 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:23.318 10:51:49 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:23.318 10:51:49 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:23.318 10:51:49 -- pm/common@16 -- # TEST_TAG=N/A 00:26:23.318 10:51:49 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:23.318 10:51:49 -- common/autotest_common.sh@52 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:23.318 10:51:49 -- common/autotest_common.sh@56 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:23.318 10:51:49 -- common/autotest_common.sh@58 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:23.318 10:51:49 -- common/autotest_common.sh@60 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:23.318 10:51:49 -- common/autotest_common.sh@62 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:23.318 10:51:49 -- common/autotest_common.sh@64 -- # : 00:26:23.318 10:51:49 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:23.318 10:51:49 -- common/autotest_common.sh@66 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:23.318 10:51:49 -- common/autotest_common.sh@68 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:23.318 10:51:49 -- common/autotest_common.sh@70 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:23.318 10:51:49 -- common/autotest_common.sh@72 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:23.318 10:51:49 -- common/autotest_common.sh@74 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:23.318 10:51:49 -- common/autotest_common.sh@76 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:23.318 10:51:49 -- common/autotest_common.sh@78 -- # : 0 00:26:23.318 10:51:49 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:23.318 10:51:49 -- common/autotest_common.sh@80 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:23.318 10:51:49 -- common/autotest_common.sh@82 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:23.318 10:51:49 -- common/autotest_common.sh@84 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:23.318 10:51:49 -- common/autotest_common.sh@86 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:23.318 10:51:49 -- common/autotest_common.sh@88 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:23.318 10:51:49 -- common/autotest_common.sh@90 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:23.318 10:51:49 -- common/autotest_common.sh@92 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:23.318 10:51:49 -- common/autotest_common.sh@94 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:23.318 10:51:49 -- common/autotest_common.sh@96 -- # : rdma 00:26:23.318 10:51:49 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:23.318 10:51:49 -- common/autotest_common.sh@98 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:23.318 10:51:49 -- common/autotest_common.sh@100 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:23.318 10:51:49 -- common/autotest_common.sh@102 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:23.318 10:51:49 -- common/autotest_common.sh@104 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:23.318 10:51:49 -- common/autotest_common.sh@106 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:23.318 10:51:49 -- common/autotest_common.sh@108 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:23.318 10:51:49 -- common/autotest_common.sh@110 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:23.318 10:51:49 -- common/autotest_common.sh@112 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:23.318 10:51:49 -- common/autotest_common.sh@114 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:23.318 10:51:49 -- common/autotest_common.sh@116 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:23.318 10:51:49 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:26:23.318 10:51:49 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:23.318 10:51:49 -- common/autotest_common.sh@120 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:23.318 10:51:49 -- common/autotest_common.sh@122 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:23.318 10:51:49 -- common/autotest_common.sh@124 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:23.318 10:51:49 -- 
common/autotest_common.sh@126 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:23.318 10:51:49 -- common/autotest_common.sh@128 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:23.318 10:51:49 -- common/autotest_common.sh@130 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:23.318 10:51:49 -- common/autotest_common.sh@132 -- # : v22.11.4 00:26:23.318 10:51:49 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:23.318 10:51:49 -- common/autotest_common.sh@134 -- # : true 00:26:23.318 10:51:49 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:23.318 10:51:49 -- common/autotest_common.sh@136 -- # : 1 00:26:23.318 10:51:49 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:23.318 10:51:49 -- common/autotest_common.sh@138 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:23.318 10:51:49 -- common/autotest_common.sh@140 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:23.318 10:51:49 -- common/autotest_common.sh@142 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:23.318 10:51:49 -- common/autotest_common.sh@144 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:23.318 10:51:49 -- common/autotest_common.sh@146 -- # : 0 00:26:23.318 10:51:49 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:23.318 10:51:49 -- common/autotest_common.sh@148 -- # : 00:26:23.318 10:51:49 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:23.319 10:51:49 -- common/autotest_common.sh@150 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:23.319 10:51:49 -- common/autotest_common.sh@152 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:23.319 10:51:49 -- common/autotest_common.sh@154 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:23.319 10:51:49 -- common/autotest_common.sh@156 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:23.319 10:51:49 -- common/autotest_common.sh@158 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:23.319 10:51:49 -- common/autotest_common.sh@160 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:23.319 10:51:49 -- common/autotest_common.sh@163 -- # : 00:26:23.319 10:51:49 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:23.319 10:51:49 -- common/autotest_common.sh@165 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:23.319 10:51:49 -- common/autotest_common.sh@167 -- # : 0 00:26:23.319 10:51:49 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:23.319 10:51:49 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@172 -- # 
DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:23.319 10:51:49 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:23.319 10:51:49 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:23.319 10:51:49 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:23.319 10:51:49 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:23.319 10:51:49 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:23.319 10:51:49 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:23.319 10:51:49 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:23.319 10:51:49 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:23.319 10:51:49 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:23.319 10:51:49 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:23.319 10:51:49 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:23.319 10:51:49 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:23.319 10:51:49 -- common/autotest_common.sh@196 -- # cat 00:26:23.319 10:51:49 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:23.319 10:51:49 -- 
common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:23.319 10:51:49 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:23.319 10:51:49 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:23.319 10:51:49 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:23.319 10:51:49 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:23.319 10:51:49 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:23.319 10:51:49 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:23.319 10:51:49 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:23.319 10:51:49 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:23.319 10:51:49 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:23.319 10:51:49 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:23.319 10:51:49 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:23.319 10:51:49 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:23.319 10:51:49 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:23.319 10:51:49 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:23.319 10:51:49 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:23.319 10:51:49 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:23.319 10:51:49 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:23.319 10:51:49 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:23.319 10:51:49 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:23.319 10:51:49 -- common/autotest_common.sh@249 -- # valgrind= 00:26:23.319 10:51:49 -- common/autotest_common.sh@255 -- # uname -s 00:26:23.319 10:51:49 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:23.319 10:51:49 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:23.319 10:51:49 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:23.319 10:51:49 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:23.319 10:51:49 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:23.319 10:51:49 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:23.319 10:51:49 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:23.319 10:51:49 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:23.319 10:51:49 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:23.319 10:51:49 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:23.319 10:51:49 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:23.319 10:51:49 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:23.319 10:51:49 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:23.319 10:51:49 -- common/autotest_common.sh@309 -- # [[ -z 143771 ]] 00:26:23.319 10:51:49 -- common/autotest_common.sh@309 -- # kill -0 143771 00:26:23.319 10:51:49 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:23.319 10:51:49 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:23.319 10:51:49 -- 
common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:23.319 10:51:49 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:23.319 10:51:49 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:23.319 10:51:49 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:26:23.319 10:51:49 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:23.319 10:51:49 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:23.319 10:51:49 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.58msAQ 00:26:23.319 10:51:49 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:23.319 10:51:49 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:23.319 10:51:49 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:23.319 10:51:49 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.58msAQ/tests/interrupt /tmp/spdk.58msAQ 00:26:23.319 10:51:49 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- common/autotest_common.sh@318 -- # df -T 00:26:23.320 10:51:49 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:26:23.320 10:51:49 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # avails["$mount"]=9443672064 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:23.320 10:51:49 -- common/autotest_common.sh@354 -- # uses["$mount"]=11156344832 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267146240 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:26:23.320 10:51:49 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:23.320 10:51:49 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- 
common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:26:23.320 10:51:49 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:26:23.320 10:51:49 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:26:23.320 10:51:49 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # avails["$mount"]=93729751040 00:26:23.320 10:51:49 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:23.320 10:51:49 -- common/autotest_common.sh@354 -- # uses["$mount"]=5973028864 00:26:23.320 10:51:49 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:23.320 10:51:49 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:23.320 * Looking for test storage... 00:26:23.320 10:51:49 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:23.320 10:51:49 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:23.320 10:51:49 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:23.320 10:51:49 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.320 10:51:49 -- common/autotest_common.sh@363 -- # mount=/ 00:26:23.320 10:51:49 -- common/autotest_common.sh@365 -- # target_space=9443672064 00:26:23.320 10:51:49 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:23.320 10:51:49 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:23.320 10:51:49 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:23.320 10:51:49 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:23.320 10:51:49 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:23.320 10:51:49 -- common/autotest_common.sh@372 -- # new_size=13370937344 00:26:23.320 10:51:49 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:23.320 10:51:49 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.320 10:51:49 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.320 10:51:49 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:23.320 10:51:49 -- common/autotest_common.sh@380 -- # return 0 00:26:23.320 10:51:49 -- common/autotest_common.sh@1667 -- # set 
-o errtrace 00:26:23.320 10:51:49 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:23.320 10:51:49 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:23.320 10:51:49 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:23.320 10:51:49 -- common/autotest_common.sh@1672 -- # true 00:26:23.320 10:51:49 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:23.320 10:51:49 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:23.320 10:51:49 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:23.320 10:51:49 -- common/autotest_common.sh@27 -- # exec 00:26:23.320 10:51:49 -- common/autotest_common.sh@29 -- # exec 00:26:23.320 10:51:49 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:23.320 10:51:49 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:23.320 10:51:49 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:23.320 10:51:49 -- common/autotest_common.sh@18 -- # set -x 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:23.320 10:51:49 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:23.320 10:51:49 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:23.320 10:51:49 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143820 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:23.320 10:51:49 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143820 /var/tmp/spdk.sock 00:26:23.320 10:51:49 -- common/autotest_common.sh@819 -- # '[' -z 143820 ']' 00:26:23.320 10:51:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.320 10:51:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:23.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.320 10:51:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
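(Editor's note: waitforlisten, per the message just traced, blocks until the freshly launched interrupt_tgt answers on its RPC socket. A hypothetical polling loop in that spirit; the real helper in autotest_common.sh is more involved, and only the pid and socket path are taken from the trace:

pid=143820
sock=/var/tmp/spdk.sock
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do
    # Give up if the target died during startup.
    kill -0 "$pid" 2>/dev/null || { echo "interrupt_tgt exited before listening" >&2; exit 1; }
    # Consider the target ready once the RPC socket answers a trivial call.
    "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
)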
00:26:23.320 10:51:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:23.320 10:51:49 -- common/autotest_common.sh@10 -- # set +x 00:26:23.320 [2024-07-24 10:51:49.878396] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:23.321 [2024-07-24 10:51:49.878637] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143820 ] 00:26:23.580 [2024-07-24 10:51:50.036526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:23.580 [2024-07-24 10:51:50.128406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.580 [2024-07-24 10:51:50.128515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:23.580 [2024-07-24 10:51:50.128768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.580 [2024-07-24 10:51:50.212886] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:24.516 10:51:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:24.516 10:51:50 -- common/autotest_common.sh@852 -- # return 0 00:26:24.516 10:51:50 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:24.516 10:51:50 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:24.774 Malloc0 00:26:24.774 Malloc1 00:26:24.774 Malloc2 00:26:24.774 10:51:51 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:24.774 10:51:51 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:24.774 10:51:51 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:24.774 10:51:51 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:24.774 5000+0 records in 00:26:24.774 5000+0 records out 00:26:24.774 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0245641 s, 417 MB/s 00:26:24.774 10:51:51 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:25.033 AIO0 00:26:25.033 10:51:51 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 143820 00:26:25.033 10:51:51 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 143820 without_thd 00:26:25.033 10:51:51 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=143820 00:26:25.033 10:51:51 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:25.033 10:51:51 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:25.033 10:51:51 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:25.033 10:51:51 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:25.033 10:51:51 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:25.033 10:51:51 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:25.033 10:51:51 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:25.033 10:51:51 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:25.033 10:51:51 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:25.292 
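(Editor's note: the thread-id lookup traced here queries thread_get_stats over RPC and filters the result by cpumask with jq. A condensed sketch of the same pipeline; the jq filter is copied from the trace, while the wrapper function and the 0x1 -> 1 conversion are illustrative:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
reactor_get_thread_ids() {
    local reactor_cpumask=$(( $1 ))   # 0x1 -> 1, 0x4 -> 4, matching the traced values
    "$rpc_py" thread_get_stats \
        | jq --arg reactor_cpumask "$reactor_cpumask" \
             '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}
thd0_ids=($(reactor_get_thread_ids 0x1))   # yields "1", the app_thread pinned to reactor 0 in this run
)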
10:51:51 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:25.292 10:51:51 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:25.292 10:51:51 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:25.292 10:51:51 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:25.292 10:51:51 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:25.292 10:51:51 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:25.292 10:51:51 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:25.292 10:51:51 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:25.292 10:51:51 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:25.551 10:51:52 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:25.551 10:51:52 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:25.551 spdk_thread ids are 1 on reactor0. 00:26:25.551 10:51:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:25.551 10:51:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143820 0 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143820 0 idle 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@33 -- # local pid=143820 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143820 -w 256 00:26:25.551 10:51:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143820 root 20 0 20.1t 57772 25728 S 0.0 0.5 0:00.35 reactor_0' 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # echo 143820 root 20 0 20.1t 57772 25728 S 0.0 0.5 0:00.35 reactor_0 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:25.810 10:51:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:25.810 10:51:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143820 1 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143820 1 idle 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@33 -- # local 
pid=143820 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143820 -w 256 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143823 root 20 0 20.1t 57772 25728 S 0.0 0.5 0:00.00 reactor_1' 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # echo 143823 root 20 0 20.1t 57772 25728 S 0.0 0.5 0:00.00 reactor_1 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:25.810 10:51:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:25.810 10:51:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143820 2 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143820 2 idle 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@33 -- # local pid=143820 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:25.810 10:51:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:25.811 10:51:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143820 -w 256 00:26:25.811 10:51:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143824 root 20 0 20.1t 57772 25728 S 0.0 0.5 0:00.00 reactor_2' 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@48 -- # echo 143824 root 20 0 20.1t 57772 25728 S 0.0 0.5 0:00.00 reactor_2 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 
00:26:26.069 10:51:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:26.069 10:51:52 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:26.069 10:51:52 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:26:26.069 10:51:52 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:26.328 [2024-07-24 10:51:52.909622] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:26.328 10:51:52 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:26.587 [2024-07-24 10:51:53.173339] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:26.587 [2024-07-24 10:51:53.174299] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:26.587 10:51:53 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:26.851 [2024-07-24 10:51:53.437183] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:26.851 [2024-07-24 10:51:53.437765] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:26.851 10:51:53 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:26.851 10:51:53 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143820 0 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143820 0 busy 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@33 -- # local pid=143820 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143820 -w 256 00:26:26.851 10:51:53 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:27.228 10:51:53 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143820 root 20 0 20.1t 57932 25728 R 93.8 0.5 0:00.80 reactor_0' 00:26:27.228 10:51:53 -- interrupt/interrupt_common.sh@48 -- # echo 143820 root 20 0 20.1t 57932 25728 R 93.8 0.5 0:00.80 reactor_0 00:26:27.228 10:51:53 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:27.228 10:51:53 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.8 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:27.229 10:51:53 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:27.229 10:51:53 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143820 2 
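
Every busy/idle decision traced above follows the same recipe: take one batch snapshot of the target's threads with top, pick out the reactor's row, and read the %CPU column. The following is a standalone sketch of that probe (call it reactor_state.sh; it is an illustration, not the interrupt_common.sh helper itself, and it assumes procps top whose -bHn 1 output puts %CPU in column 9, with the same ~70%/~30% thresholds the test applies):

    #!/usr/bin/env bash
    # Probe whether a named reactor thread of a given PID looks busy or idle,
    # mirroring the top/grep/awk pipeline visible in the trace above.
    # Usage: reactor_state.sh <pid> <thread-name> <busy|idle>
    pid=$1; name=$2; state=$3

    row=$(top -bHn 1 -p "$pid" -w 256 | grep "$name" | sed -e 's/^\s*//g')
    cpu=$(awk '{print $9}' <<<"$row")   # %CPU column of the thread's row
    cpu=${cpu%.*}                       # keep the integer part, e.g. 93.8 -> 93

    if [[ $state == busy ]]; then
        # a reactor that left interrupt mode should be polling near 100% CPU
        (( ${cpu:-0} >= 70 )) && echo "$name is busy (${cpu:-0}%)" || echo "$name is NOT busy"
    else
        # a reactor sitting in interrupt mode should be essentially asleep
        (( ${cpu:-0} <= 30 )) && echo "$name is idle (${cpu:-0}%)" || echo "$name is NOT idle"
    fi

The run above shows exactly these two extremes: reactor_0 at 93.8% once its interrupt mode is disabled, and 0.0% for every reactor that is still interrupt-driven.
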
00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143820 2 busy 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@33 -- # local pid=143820 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143820 -w 256 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143824 root 20 0 20.1t 57932 25728 R 99.9 0.5 0:00.35 reactor_2' 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@48 -- # echo 143824 root 20 0 20.1t 57932 25728 R 99.9 0.5 0:00.35 reactor_2 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:27.229 10:51:53 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:27.229 10:51:53 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:27.507 [2024-07-24 10:51:54.021153] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
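
Both halves of the toggle use one RPC, driven through the test's RPC plugin: passing -d drops a reactor out of interrupt mode (its thread starts polling, which is why reactor_0 and reactor_2 report ~94-100% CPU above), and calling it again without -d puts the reactor back into interrupt mode. The two calls as they appear in this run, assuming an interrupt_tgt already listening on the default /var/tmp/spdk.sock and the interrupt_plugin module importable by rpc.py (the test arranges both):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # leave interrupt mode on reactor 2: the reactor switches to polling
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d

    # ...exercise the busy reactor, then re-enter interrupt mode on reactor 2
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2
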
00:26:27.507 [2024-07-24 10:51:54.021925] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:27.507 10:51:54 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:27.507 10:51:54 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 143820 2 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143820 2 idle 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@33 -- # local pid=143820 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143820 -w 256 00:26:27.507 10:51:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143824 root 20 0 20.1t 57980 25728 S 0.0 0.5 0:00.58 reactor_2' 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@48 -- # echo 143824 root 20 0 20.1t 57980 25728 S 0.0 0.5 0:00.58 reactor_2 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:27.765 10:51:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:27.765 10:51:54 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:28.022 [2024-07-24 10:51:54.453219] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:28.022 [2024-07-24 10:51:54.453909] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:28.022 10:51:54 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:28.022 10:51:54 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:28.022 10:51:54 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:28.022 [2024-07-24 10:51:54.693601] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
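
In this without_thd variant the test also has to move the app_thread out of the way by hand: before toggling reactor 0 it pins spdk_thread id 1 (found earlier via thread_get_stats) onto core 1, and once reactor 0 is back in interrupt mode it returns the thread to core 0. That is the thread_set_cpumask pair visible in the trace; in isolation the calls are just (thread id 1 is specific to this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" thread_set_cpumask -i 1 -m 0x2   # park app_thread on core 1 while reactor 0 is toggled
    "$rpc" thread_set_cpumask -i 1 -m 0x1   # move it back to core 0 afterwards
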
00:26:28.281 10:51:54 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 143820 0 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143820 0 idle 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@33 -- # local pid=143820 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143820 -w 256 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143820 root 20 0 20.1t 58080 25728 S 0.0 0.5 0:01.64 reactor_0' 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@48 -- # echo 143820 root 20 0 20.1t 58080 25728 S 0.0 0.5 0:01.64 reactor_0 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:28.281 10:51:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:28.281 10:51:54 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:28.281 10:51:54 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:28.281 10:51:54 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:28.281 10:51:54 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 143820 00:26:28.281 10:51:54 -- common/autotest_common.sh@926 -- # '[' -z 143820 ']' 00:26:28.281 10:51:54 -- common/autotest_common.sh@930 -- # kill -0 143820 00:26:28.281 10:51:54 -- common/autotest_common.sh@931 -- # uname 00:26:28.281 10:51:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:28.281 10:51:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143820 00:26:28.281 10:51:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:28.281 10:51:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:28.281 killing process with pid 143820 00:26:28.281 10:51:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143820' 00:26:28.281 10:51:54 -- common/autotest_common.sh@945 -- # kill 143820 00:26:28.281 10:51:54 -- common/autotest_common.sh@950 -- # wait 143820 00:26:28.539 10:51:55 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:28.539 10:51:55 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:28.539 10:51:55 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:28.539 10:51:55 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.539 10:51:55 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 
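
Teardown of the first target follows the killprocess sequence traced a few lines above: confirm the PID is still alive, check what name it resolves to so a sudo wrapper is never killed by accident, then kill it and wait for it to be reaped. A condensed sketch of that flow (an approximation of the autotest_common.sh helper, not a copy of it):

    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                      # nothing to do if it already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1                 # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # reap it (works when it is our child)
    }
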
00:26:28.539 10:51:55 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=143960 00:26:28.539 10:51:55 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:28.539 10:51:55 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 143960 /var/tmp/spdk.sock 00:26:28.539 10:51:55 -- common/autotest_common.sh@819 -- # '[' -z 143960 ']' 00:26:28.539 10:51:55 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:28.539 10:51:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.797 10:51:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:28.797 10:51:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.797 10:51:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:28.797 10:51:55 -- common/autotest_common.sh@10 -- # set +x 00:26:28.797 [2024-07-24 10:51:55.264694] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:28.797 [2024-07-24 10:51:55.264935] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143960 ] 00:26:28.797 [2024-07-24 10:51:55.419572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:29.056 [2024-07-24 10:51:55.518742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.056 [2024-07-24 10:51:55.518856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:29.056 [2024-07-24 10:51:55.518875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.056 [2024-07-24 10:51:55.608961] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
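
The second half of the test restarts the same target for the with-threads path: start_intr_tgt launches the interrupt_tgt example on cores 0-2 and blocks until its RPC socket answers. A sketch of that launch using the binary, core mask and socket from this run; the retry loop below is only an illustrative stand-in for the waitforlisten helper, not a reproduction of it:

    interrupt_tgt=/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    "$interrupt_tgt" -m 0x07 -r "$sock" -E -g &   # same flags the test passes
    tgt_pid=$!

    # stand-in for waitforlisten: retry a harmless RPC until the socket is up
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
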
00:26:29.623 10:51:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:29.623 10:51:56 -- common/autotest_common.sh@852 -- # return 0 00:26:29.623 10:51:56 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:29.623 10:51:56 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:29.882 Malloc0 00:26:29.882 Malloc1 00:26:29.882 Malloc2 00:26:29.882 10:51:56 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:29.882 10:51:56 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:29.882 10:51:56 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:29.882 10:51:56 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:29.882 5000+0 records in 00:26:29.883 5000+0 records out 00:26:29.883 10240000 bytes (10 MB, 9.8 MiB) copied, 0.023506 s, 436 MB/s 00:26:29.883 10:51:56 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:30.142 AIO0 00:26:30.142 10:51:56 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 143960 00:26:30.142 10:51:56 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 143960 00:26:30.142 10:51:56 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=143960 00:26:30.142 10:51:56 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:30.142 10:51:56 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:30.142 10:51:56 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:30.142 10:51:56 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:30.142 10:51:56 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:30.142 10:51:56 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:30.142 10:51:56 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:30.142 10:51:56 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:30.142 10:51:56 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:30.401 10:51:57 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:30.401 10:51:57 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:30.401 10:51:57 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:30.401 10:51:57 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:30.401 10:51:57 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:30.401 10:51:57 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:30.401 10:51:57 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:30.401 10:51:57 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:30.401 10:51:57 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:30.659 10:51:57 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:30.659 spdk_thread ids are 1 on reactor0. 
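
Before the with-threads pass, the log repeats the same backing-store and thread-discovery setup as before: a 10 MB file is zero-filled with dd and registered as an AIO bdev, and thread_get_stats is filtered with jq to learn which spdk_thread ids sit on each reactor (reactor 0's cpumask 0x1 becomes the bare string "1" that the filter compares against). Pulled out of the trace, those steps look like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile

    # back an AIO bdev with a 10 MB zero-filled file (2048-byte blocks)
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000
    "$rpc" bdev_aio_create "$aiofile" AIO0 2048

    # list the spdk_thread ids living on reactor 0 (cpumask 0x1 -> "1")
    "$rpc" thread_get_stats \
        | jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
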
00:26:30.659 10:51:57 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:30.659 10:51:57 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:30.659 10:51:57 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:30.659 10:51:57 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143960 0 00:26:30.659 10:51:57 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143960 0 idle 00:26:30.659 10:51:57 -- interrupt/interrupt_common.sh@33 -- # local pid=143960 00:26:30.659 10:51:57 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143960 -w 256 00:26:30.660 10:51:57 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143960 root 20 0 20.1t 58088 26124 S 0.0 0.5 0:00.35 reactor_0' 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@48 -- # echo 143960 root 20 0 20.1t 58088 26124 S 0.0 0.5 0:00.35 reactor_0 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:30.919 10:51:57 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:30.919 10:51:57 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143960 1 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143960 1 idle 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@33 -- # local pid=143960 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143960 -w 256 00:26:30.919 10:51:57 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143963 root 20 0 20.1t 58088 26124 S 0.0 0.5 0:00.00 reactor_1' 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@48 -- # echo 143963 root 20 0 20.1t 58088 26124 S 0.0 0.5 0:00.00 reactor_1 00:26:31.179 10:51:57 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:31.179 10:51:57 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:31.179 10:51:57 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 143960 2 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143960 2 idle 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@33 -- # local pid=143960 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143960 -w 256 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143964 root 20 0 20.1t 58088 26124 S 0.0 0.5 0:00.00 reactor_2' 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@48 -- # echo 143964 root 20 0 20.1t 58088 26124 S 0.0 0.5 0:00.00 reactor_2 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:31.179 10:51:57 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:31.179 10:51:57 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:31.179 10:51:57 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:31.438 [2024-07-24 10:51:58.025268] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:31.438 [2024-07-24 10:51:58.025620] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:26:31.438 [2024-07-24 10:51:58.025853] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:31.438 10:51:58 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:31.697 [2024-07-24 10:51:58.269206] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:31.697 [2024-07-24 10:51:58.269686] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:31.697 10:51:58 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:31.697 10:51:58 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143960 0 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143960 0 busy 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@33 -- # local pid=143960 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143960 -w 256 00:26:31.697 10:51:58 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:31.955 10:51:58 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143960 root 20 0 20.1t 58216 26124 R 99.9 0.5 0:00.79 reactor_0' 00:26:31.955 10:51:58 -- interrupt/interrupt_common.sh@48 -- # echo 143960 root 20 0 20.1t 58216 26124 R 99.9 0.5 0:00.79 reactor_0 00:26:31.955 10:51:58 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:31.955 10:51:58 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:31.956 10:51:58 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:31.956 10:51:58 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 143960 2 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 143960 2 busy 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@33 -- # local pid=143960 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143960 -w 256 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 
143964 root 20 0 20.1t 58216 26124 R 99.9 0.5 0:00.35 reactor_2' 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@48 -- # echo 143964 root 20 0 20.1t 58216 26124 R 99.9 0.5 0:00.35 reactor_2 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:31.956 10:51:58 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:32.214 10:51:58 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:32.214 10:51:58 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:32.214 10:51:58 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:32.474 [2024-07-24 10:51:58.909485] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:32.474 [2024-07-24 10:51:58.909755] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:32.474 10:51:58 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:32.474 10:51:58 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 143960 2 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143960 2 idle 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@33 -- # local pid=143960 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143960 -w 256 00:26:32.474 10:51:58 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143964 root 20 0 20.1t 58272 26124 S 0.0 0.5 0:00.64 reactor_2' 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@48 -- # echo 143964 root 20 0 20.1t 58272 26124 S 0.0 0.5 0:00.64 reactor_2 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:32.474 10:51:59 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:32.474 10:51:59 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:32.733 [2024-07-24 10:51:59.317507] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to 
enable interrupt mode on reactor 0. 00:26:32.734 [2024-07-24 10:51:59.317895] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:26:32.734 [2024-07-24 10:51:59.317962] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:32.734 10:51:59 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:32.734 10:51:59 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 143960 0 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 143960 0 idle 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@33 -- # local pid=143960 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 143960 -w 256 00:26:32.734 10:51:59 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:32.992 10:51:59 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 143960 root 20 0 20.1t 58328 26124 S 0.0 0.5 0:01.66 reactor_0' 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@48 -- # echo 143960 root 20 0 20.1t 58328 26124 S 0.0 0.5 0:01.66 reactor_0 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:32.993 10:51:59 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:32.993 10:51:59 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:32.993 10:51:59 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:32.993 10:51:59 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:32.993 10:51:59 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 143960 00:26:32.993 10:51:59 -- common/autotest_common.sh@926 -- # '[' -z 143960 ']' 00:26:32.993 10:51:59 -- common/autotest_common.sh@930 -- # kill -0 143960 00:26:32.993 10:51:59 -- common/autotest_common.sh@931 -- # uname 00:26:32.993 10:51:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:32.993 10:51:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 143960 00:26:32.993 10:51:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:32.993 10:51:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:32.993 killing process with pid 143960 00:26:32.993 10:51:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 143960' 00:26:32.993 10:51:59 -- common/autotest_common.sh@945 -- # kill 143960 00:26:32.993 10:51:59 -- common/autotest_common.sh@950 -- # wait 143960 00:26:33.251 10:51:59 -- 
interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:33.251 10:51:59 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:33.251 00:26:33.251 real 0m10.258s 00:26:33.251 user 0m10.141s 00:26:33.251 sys 0m1.691s 00:26:33.251 10:51:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.251 10:51:59 -- common/autotest_common.sh@10 -- # set +x 00:26:33.251 ************************************ 00:26:33.251 END TEST reactor_set_interrupt 00:26:33.251 ************************************ 00:26:33.251 10:51:59 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:33.251 10:51:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:33.251 10:51:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:33.251 10:51:59 -- common/autotest_common.sh@10 -- # set +x 00:26:33.251 ************************************ 00:26:33.251 START TEST reap_unregistered_poller 00:26:33.251 ************************************ 00:26:33.251 10:51:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:33.514 * Looking for test storage... 00:26:33.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.514 10:51:59 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:33.514 10:51:59 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:33.514 10:51:59 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.514 10:51:59 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.514 10:51:59 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
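
The reap_unregistered_poller test that starts here opens with the usual SPDK test preamble, traced above: the script sources interrupt_common.sh, which resolves the test directory and the repository root from its own location and then pulls in the shared autotest helpers. A minimal sketch of that preamble (using $0 as a stand-in for the resolved script path):

    # resolve testdir/rootdir relative to the running test script, then load common helpers
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../..")
    source "$rootdir/test/common/autotest_common.sh"
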
00:26:33.514 10:51:59 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:33.514 10:51:59 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:33.514 10:51:59 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:33.514 10:51:59 -- common/autotest_common.sh@34 -- # set -e 00:26:33.514 10:51:59 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:33.514 10:51:59 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:33.514 10:51:59 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:33.514 10:51:59 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:33.514 10:51:59 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:26:33.514 10:51:59 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:26:33.514 10:51:59 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:26:33.514 10:51:59 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:33.514 10:51:59 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:26:33.514 10:51:59 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:26:33.514 10:51:59 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:26:33.514 10:51:59 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:26:33.514 10:51:59 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:26:33.514 10:51:59 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:26:33.514 10:51:59 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:26:33.514 10:51:59 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:26:33.514 10:51:59 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:26:33.514 10:51:59 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:26:33.514 10:51:59 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:33.514 10:51:59 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:26:33.514 10:51:59 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:26:33.514 10:51:59 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:33.514 10:51:59 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:33.514 10:51:59 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:26:33.514 10:51:59 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:26:33.514 10:51:59 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:26:33.514 10:51:59 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:33.514 10:51:59 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:26:33.514 10:51:59 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:26:33.514 10:51:59 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:33.514 10:51:59 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:33.514 10:51:59 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:26:33.514 10:51:59 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:26:33.514 10:51:59 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:26:33.514 10:51:59 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:26:33.514 10:51:59 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:26:33.514 10:51:59 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:26:33.514 10:51:59 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:26:33.514 10:51:59 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:26:33.514 10:51:59 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:26:33.514 10:51:59 
-- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:26:33.514 10:51:59 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:26:33.514 10:51:59 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:26:33.514 10:51:59 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:26:33.514 10:51:59 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:26:33.514 10:51:59 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:26:33.514 10:51:59 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:26:33.514 10:51:59 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:33.514 10:51:59 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:26:33.514 10:51:59 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:26:33.514 10:51:59 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:26:33.514 10:51:59 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:33.514 10:51:59 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:26:33.514 10:51:59 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:26:33.514 10:51:59 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:26:33.514 10:51:59 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:26:33.514 10:51:59 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:26:33.514 10:51:59 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:26:33.514 10:51:59 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:26:33.514 10:51:59 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:26:33.514 10:51:59 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:26:33.514 10:51:59 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:26:33.514 10:51:59 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:26:33.514 10:51:59 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:26:33.514 10:51:59 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:33.514 10:51:59 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:26:33.514 10:51:59 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:26:33.514 10:51:59 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:26:33.514 10:51:59 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:26:33.514 10:51:59 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:33.514 10:51:59 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:26:33.514 10:51:59 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:26:33.514 10:51:59 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:26:33.514 10:51:59 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:26:33.514 10:51:59 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:26:33.514 10:51:59 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:26:33.514 10:51:59 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:26:33.514 10:51:59 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:26:33.514 10:51:59 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:26:33.514 10:51:59 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:26:33.514 10:51:59 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:33.514 10:51:59 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:26:33.514 10:51:59 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:33.514 10:51:59 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:33.514 10:51:59 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:33.514 10:51:59 -- common/applications.sh@8 
-- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:33.514 10:51:59 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:33.514 10:51:59 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:33.514 10:51:59 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:33.514 10:51:59 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:33.514 10:51:59 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:33.514 10:51:59 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:33.514 10:51:59 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:33.514 10:51:59 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:33.514 10:51:59 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:33.514 10:51:59 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:33.514 10:51:59 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:33.514 10:51:59 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:33.514 10:51:59 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:33.515 #define SPDK_CONFIG_H 00:26:33.515 #define SPDK_CONFIG_APPS 1 00:26:33.515 #define SPDK_CONFIG_ARCH native 00:26:33.515 #define SPDK_CONFIG_ASAN 1 00:26:33.515 #undef SPDK_CONFIG_AVAHI 00:26:33.515 #undef SPDK_CONFIG_CET 00:26:33.515 #define SPDK_CONFIG_COVERAGE 1 00:26:33.515 #define SPDK_CONFIG_CROSS_PREFIX 00:26:33.515 #undef SPDK_CONFIG_CRYPTO 00:26:33.515 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:33.515 #undef SPDK_CONFIG_CUSTOMOCF 00:26:33.515 #undef SPDK_CONFIG_DAOS 00:26:33.515 #define SPDK_CONFIG_DAOS_DIR 00:26:33.515 #define SPDK_CONFIG_DEBUG 1 00:26:33.515 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:33.515 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:26:33.515 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:26:33.515 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:26:33.515 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:33.515 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:33.515 #define SPDK_CONFIG_EXAMPLES 1 00:26:33.515 #undef SPDK_CONFIG_FC 00:26:33.515 #define SPDK_CONFIG_FC_PATH 00:26:33.515 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:33.515 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:33.515 #undef SPDK_CONFIG_FUSE 00:26:33.515 #undef SPDK_CONFIG_FUZZER 00:26:33.515 #define SPDK_CONFIG_FUZZER_LIB 00:26:33.515 #undef SPDK_CONFIG_GOLANG 00:26:33.515 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:33.515 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:33.515 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:33.515 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:33.515 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:33.515 #define SPDK_CONFIG_IDXD 1 00:26:33.515 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:33.515 #undef SPDK_CONFIG_IPSEC_MB 00:26:33.515 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:33.515 #define SPDK_CONFIG_ISAL 1 00:26:33.515 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:33.515 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:33.515 #define SPDK_CONFIG_LIBDIR 00:26:33.515 #undef SPDK_CONFIG_LTO 00:26:33.515 #define SPDK_CONFIG_MAX_LCORES 00:26:33.515 #define SPDK_CONFIG_NVME_CUSE 1 00:26:33.515 #undef SPDK_CONFIG_OCF 00:26:33.515 #define SPDK_CONFIG_OCF_PATH 00:26:33.515 #define 
SPDK_CONFIG_OPENSSL_PATH 00:26:33.515 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:33.515 #undef SPDK_CONFIG_PGO_USE 00:26:33.515 #define SPDK_CONFIG_PREFIX /usr/local 00:26:33.515 #define SPDK_CONFIG_RAID5F 1 00:26:33.515 #undef SPDK_CONFIG_RBD 00:26:33.515 #define SPDK_CONFIG_RDMA 1 00:26:33.515 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:33.515 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:33.515 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:33.515 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:33.515 #undef SPDK_CONFIG_SHARED 00:26:33.515 #undef SPDK_CONFIG_SMA 00:26:33.515 #define SPDK_CONFIG_TESTS 1 00:26:33.515 #undef SPDK_CONFIG_TSAN 00:26:33.515 #undef SPDK_CONFIG_UBLK 00:26:33.515 #define SPDK_CONFIG_UBSAN 1 00:26:33.515 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:33.515 #undef SPDK_CONFIG_URING 00:26:33.515 #define SPDK_CONFIG_URING_PATH 00:26:33.515 #undef SPDK_CONFIG_URING_ZNS 00:26:33.515 #undef SPDK_CONFIG_USDT 00:26:33.515 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:33.515 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:33.515 #undef SPDK_CONFIG_VFIO_USER 00:26:33.515 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:33.515 #define SPDK_CONFIG_VHOST 1 00:26:33.515 #define SPDK_CONFIG_VIRTIO 1 00:26:33.515 #undef SPDK_CONFIG_VTUNE 00:26:33.515 #define SPDK_CONFIG_VTUNE_DIR 00:26:33.515 #define SPDK_CONFIG_WERROR 1 00:26:33.515 #define SPDK_CONFIG_WPDK_DIR 00:26:33.515 #undef SPDK_CONFIG_XNVME 00:26:33.515 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:33.515 10:51:59 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:33.515 10:51:59 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:33.515 10:51:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.515 10:51:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.515 10:51:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.515 10:51:59 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.515 10:51:59 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.515 10:51:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.515 10:51:59 -- paths/export.sh@5 -- # export PATH 00:26:33.515 10:51:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:33.515 10:51:59 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:33.515 10:51:59 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:33.515 10:52:00 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:33.515 10:52:00 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:33.515 10:52:00 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:33.515 10:52:00 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:33.515 10:52:00 -- pm/common@16 -- # TEST_TAG=N/A 00:26:33.515 10:52:00 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:33.515 10:52:00 -- common/autotest_common.sh@52 -- # : 1 00:26:33.515 10:52:00 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:33.515 10:52:00 -- common/autotest_common.sh@56 -- # : 0 00:26:33.515 10:52:00 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:33.515 10:52:00 -- common/autotest_common.sh@58 -- # : 0 00:26:33.515 10:52:00 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:33.515 10:52:00 -- common/autotest_common.sh@60 -- # : 1 00:26:33.515 10:52:00 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:33.515 10:52:00 -- common/autotest_common.sh@62 -- # : 1 00:26:33.515 10:52:00 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:33.515 10:52:00 -- common/autotest_common.sh@64 -- # : 00:26:33.515 10:52:00 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:33.515 10:52:00 -- common/autotest_common.sh@66 -- # : 0 00:26:33.515 10:52:00 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:33.515 10:52:00 -- common/autotest_common.sh@68 -- # : 0 00:26:33.515 10:52:00 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:33.515 10:52:00 -- common/autotest_common.sh@70 -- # : 0 00:26:33.515 10:52:00 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:33.516 10:52:00 -- common/autotest_common.sh@72 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:33.516 10:52:00 -- common/autotest_common.sh@74 -- # : 1 00:26:33.516 10:52:00 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:33.516 10:52:00 -- common/autotest_common.sh@76 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:33.516 10:52:00 -- common/autotest_common.sh@78 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:33.516 10:52:00 -- common/autotest_common.sh@80 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:33.516 10:52:00 -- common/autotest_common.sh@82 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:33.516 10:52:00 -- common/autotest_common.sh@84 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:33.516 10:52:00 -- 
common/autotest_common.sh@86 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:33.516 10:52:00 -- common/autotest_common.sh@88 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:33.516 10:52:00 -- common/autotest_common.sh@90 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:33.516 10:52:00 -- common/autotest_common.sh@92 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:33.516 10:52:00 -- common/autotest_common.sh@94 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:33.516 10:52:00 -- common/autotest_common.sh@96 -- # : rdma 00:26:33.516 10:52:00 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:33.516 10:52:00 -- common/autotest_common.sh@98 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:33.516 10:52:00 -- common/autotest_common.sh@100 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:33.516 10:52:00 -- common/autotest_common.sh@102 -- # : 1 00:26:33.516 10:52:00 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:33.516 10:52:00 -- common/autotest_common.sh@104 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:33.516 10:52:00 -- common/autotest_common.sh@106 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:33.516 10:52:00 -- common/autotest_common.sh@108 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:33.516 10:52:00 -- common/autotest_common.sh@110 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:33.516 10:52:00 -- common/autotest_common.sh@112 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:33.516 10:52:00 -- common/autotest_common.sh@114 -- # : 1 00:26:33.516 10:52:00 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:33.516 10:52:00 -- common/autotest_common.sh@116 -- # : 1 00:26:33.516 10:52:00 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:33.516 10:52:00 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:26:33.516 10:52:00 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:33.516 10:52:00 -- common/autotest_common.sh@120 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:33.516 10:52:00 -- common/autotest_common.sh@122 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:33.516 10:52:00 -- common/autotest_common.sh@124 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:33.516 10:52:00 -- common/autotest_common.sh@126 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:33.516 10:52:00 -- common/autotest_common.sh@128 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:33.516 10:52:00 -- common/autotest_common.sh@130 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:33.516 10:52:00 -- common/autotest_common.sh@132 -- # : v22.11.4 00:26:33.516 10:52:00 -- common/autotest_common.sh@133 -- # 
export SPDK_TEST_NATIVE_DPDK 00:26:33.516 10:52:00 -- common/autotest_common.sh@134 -- # : true 00:26:33.516 10:52:00 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:33.516 10:52:00 -- common/autotest_common.sh@136 -- # : 1 00:26:33.516 10:52:00 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:33.516 10:52:00 -- common/autotest_common.sh@138 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:33.516 10:52:00 -- common/autotest_common.sh@140 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:33.516 10:52:00 -- common/autotest_common.sh@142 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:33.516 10:52:00 -- common/autotest_common.sh@144 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:33.516 10:52:00 -- common/autotest_common.sh@146 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:33.516 10:52:00 -- common/autotest_common.sh@148 -- # : 00:26:33.516 10:52:00 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:33.516 10:52:00 -- common/autotest_common.sh@150 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:33.516 10:52:00 -- common/autotest_common.sh@152 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:33.516 10:52:00 -- common/autotest_common.sh@154 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:33.516 10:52:00 -- common/autotest_common.sh@156 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:33.516 10:52:00 -- common/autotest_common.sh@158 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:33.516 10:52:00 -- common/autotest_common.sh@160 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:33.516 10:52:00 -- common/autotest_common.sh@163 -- # : 00:26:33.516 10:52:00 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:33.516 10:52:00 -- common/autotest_common.sh@165 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:33.516 10:52:00 -- common/autotest_common.sh@167 -- # : 0 00:26:33.516 10:52:00 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:33.516 10:52:00 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@174 -- # export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:33.516 10:52:00 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:33.516 10:52:00 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:33.516 10:52:00 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:33.516 10:52:00 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:33.516 10:52:00 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:33.516 10:52:00 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:33.516 10:52:00 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:33.516 10:52:00 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:33.516 10:52:00 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:33.517 10:52:00 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:33.517 10:52:00 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:33.517 10:52:00 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:33.517 10:52:00 -- common/autotest_common.sh@196 -- # cat 00:26:33.517 10:52:00 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:33.517 10:52:00 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:33.517 10:52:00 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:33.517 10:52:00 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:33.517 10:52:00 -- common/autotest_common.sh@226 -- # 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:33.517 10:52:00 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:33.517 10:52:00 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:33.517 10:52:00 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:33.517 10:52:00 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:33.517 10:52:00 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:33.517 10:52:00 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:33.517 10:52:00 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:33.517 10:52:00 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:33.517 10:52:00 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:33.517 10:52:00 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:33.517 10:52:00 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:33.517 10:52:00 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:33.517 10:52:00 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.517 10:52:00 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:33.517 10:52:00 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:33.517 10:52:00 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:33.517 10:52:00 -- common/autotest_common.sh@249 -- # valgrind= 00:26:33.517 10:52:00 -- common/autotest_common.sh@255 -- # uname -s 00:26:33.517 10:52:00 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:33.517 10:52:00 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:33.517 10:52:00 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:33.517 10:52:00 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:33.517 10:52:00 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:33.517 10:52:00 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:33.517 10:52:00 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:33.517 10:52:00 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:33.517 10:52:00 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:33.517 10:52:00 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:33.517 10:52:00 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:33.517 10:52:00 -- common/autotest_common.sh@309 -- # [[ -z 144123 ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@309 -- # kill -0 144123 00:26:33.517 10:52:00 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:33.517 10:52:00 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:33.517 10:52:00 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:33.517 10:52:00 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:33.517 10:52:00 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:26:33.517 10:52:00 -- 
common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:33.517 10:52:00 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:33.517 10:52:00 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.f8FzkZ 00:26:33.517 10:52:00 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:33.517 10:52:00 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.f8FzkZ/tests/interrupt /tmp/spdk.f8FzkZ 00:26:33.517 10:52:00 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@318 -- # df -T 00:26:33.517 10:52:00 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:26:33.517 10:52:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=9443631104 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:33.517 10:52:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=11156385792 00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267146240 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:26:33.517 10:52:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:33.517 10:52:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:26:33.517 10:52:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 
00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:26:33.517 10:52:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest_2/ubuntu2204-libvirt/output 00:26:33.517 10:52:00 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # avails["$mount"]=93729120256 00:26:33.517 10:52:00 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:33.517 10:52:00 -- common/autotest_common.sh@354 -- # uses["$mount"]=5973659648 00:26:33.517 10:52:00 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:33.517 10:52:00 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:33.517 * Looking for test storage... 00:26:33.517 10:52:00 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:33.517 10:52:00 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:33.517 10:52:00 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.517 10:52:00 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:33.517 10:52:00 -- common/autotest_common.sh@363 -- # mount=/ 00:26:33.517 10:52:00 -- common/autotest_common.sh@365 -- # target_space=9443631104 00:26:33.517 10:52:00 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:33.517 10:52:00 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:33.517 10:52:00 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:33.517 10:52:00 -- common/autotest_common.sh@372 -- # new_size=13370978304 00:26:33.517 10:52:00 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:33.518 10:52:00 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.518 10:52:00 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.518 10:52:00 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:33.518 10:52:00 -- common/autotest_common.sh@380 -- # return 0 00:26:33.518 10:52:00 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:33.518 10:52:00 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:33.518 10:52:00 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:33.518 10:52:00 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:33.518 10:52:00 -- common/autotest_common.sh@1672 -- # true 
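For reference, the set_test_storage walk traced above reduces to integer arithmetic on the df numbers: a candidate mount is accepted when it has room for the requested scratch space and would not end up more than 95% full afterwards. A condensed sketch with the values from this run (the real helper lives in autotest_common.sh):

    requested_size=2214592512      # 2 GiB requested by the test plus 64 MiB of slack
    avail=9443631104               # free space on / (/dev/vda1, ext4)
    used=11156385792               # space already consumed on /
    total=20616794112              # total size of /
    if (( avail >= requested_size )); then
        new_size=$(( used + requested_size ))        # 13370978304
        if (( new_size * 100 / total > 95 )); then   # 64 > 95 is false, so / is acceptable
            echo 'too full, try the next storage candidate'
        else
            export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
        fi
    fi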
00:26:33.518 10:52:00 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:33.518 10:52:00 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:33.518 10:52:00 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:33.518 10:52:00 -- common/autotest_common.sh@27 -- # exec 00:26:33.518 10:52:00 -- common/autotest_common.sh@29 -- # exec 00:26:33.518 10:52:00 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:33.518 10:52:00 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:33.518 10:52:00 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:33.518 10:52:00 -- common/autotest_common.sh@18 -- # set -x 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:33.518 10:52:00 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:33.518 10:52:00 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:33.518 10:52:00 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=144172 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:33.518 10:52:00 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 144172 /var/tmp/spdk.sock 00:26:33.518 10:52:00 -- common/autotest_common.sh@819 -- # '[' -z 144172 ']' 00:26:33.518 10:52:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:33.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:33.518 10:52:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:33.518 10:52:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:33.518 10:52:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:33.518 10:52:00 -- common/autotest_common.sh@10 -- # set +x 00:26:33.518 [2024-07-24 10:52:00.111195] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
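The interrupt target started above (interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g) is driven purely over its UNIX-domain RPC socket; the poller bookkeeping that follows amounts to the calls below. This is a condensed sketch using rpc.py directly (rpc_cmd in the trace is a wrapper around it, and -s is assumed here as the socket-address option):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    app_thread=$($rpc_py -s /var/tmp/spdk.sock thread_get_pollers | jq -r '.threads[0]')
    native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")      # empty in this run
    native_pollers+=" $(jq -r '.timed_pollers[].name' <<< "$app_thread")"   # ' rpc_subsystem_poll'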
00:26:33.518 [2024-07-24 10:52:00.111444] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144172 ] 00:26:33.777 [2024-07-24 10:52:00.263489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:33.777 [2024-07-24 10:52:00.347231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.777 [2024-07-24 10:52:00.347398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.777 [2024-07-24 10:52:00.347402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:33.777 [2024-07-24 10:52:00.436309] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:34.717 10:52:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:34.717 10:52:01 -- common/autotest_common.sh@852 -- # return 0 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:34.717 10:52:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:34.717 10:52:01 -- common/autotest_common.sh@10 -- # set +x 00:26:34.717 10:52:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:34.717 "name": "app_thread", 00:26:34.717 "id": 1, 00:26:34.717 "active_pollers": [], 00:26:34.717 "timed_pollers": [ 00:26:34.717 { 00:26:34.717 "name": "rpc_subsystem_poll", 00:26:34.717 "id": 1, 00:26:34.717 "state": "waiting", 00:26:34.717 "run_count": 0, 00:26:34.717 "busy_count": 0, 00:26:34.717 "period_ticks": 8800000 00:26:34.717 } 00:26:34.717 ], 00:26:34.717 "paused_pollers": [] 00:26:34.717 }' 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:26:34.717 10:52:01 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:34.717 10:52:01 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:34.717 10:52:01 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:34.717 10:52:01 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:34.717 5000+0 records in 00:26:34.717 5000+0 records out 00:26:34.717 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0250348 s, 409 MB/s 00:26:34.717 10:52:01 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:34.975 AIO0 00:26:34.975 10:52:01 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:35.234 10:52:01 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:35.493 10:52:01 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:35.493 10:52:01 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:26:35.493 10:52:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:35.493 10:52:01 -- common/autotest_common.sh@10 -- # set +x 00:26:35.493 10:52:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:35.493 "name": "app_thread", 00:26:35.493 "id": 1, 00:26:35.493 "active_pollers": [], 00:26:35.493 "timed_pollers": [ 00:26:35.493 { 00:26:35.493 "name": "rpc_subsystem_poll", 00:26:35.493 "id": 1, 00:26:35.493 "state": "waiting", 00:26:35.493 "run_count": 0, 00:26:35.493 "busy_count": 0, 00:26:35.493 "period_ticks": 8800000 00:26:35.493 } 00:26:35.493 ], 00:26:35.493 "paused_pollers": [] 00:26:35.493 }' 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:35.493 10:52:02 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 144172 00:26:35.493 10:52:02 -- common/autotest_common.sh@926 -- # '[' -z 144172 ']' 00:26:35.493 10:52:02 -- common/autotest_common.sh@930 -- # kill -0 144172 00:26:35.493 10:52:02 -- common/autotest_common.sh@931 -- # uname 00:26:35.493 10:52:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:35.493 10:52:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144172 00:26:35.493 10:52:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:35.493 killing process with pid 144172 00:26:35.493 10:52:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:35.493 10:52:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144172' 00:26:35.493 10:52:02 -- common/autotest_common.sh@945 -- # kill 144172 00:26:35.493 10:52:02 -- common/autotest_common.sh@950 -- # wait 144172 00:26:36.060 10:52:02 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:36.060 10:52:02 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:36.060 00:26:36.060 real 0m2.567s 00:26:36.060 user 0m1.780s 00:26:36.060 sys 0m0.493s 00:26:36.060 10:52:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.060 10:52:02 -- common/autotest_common.sh@10 -- # set +x 00:26:36.060 ************************************ 00:26:36.060 END TEST reap_unregistered_poller 00:26:36.060 ************************************ 00:26:36.060 10:52:02 -- spdk/autotest.sh@204 -- # uname -s 00:26:36.060 10:52:02 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:26:36.060 10:52:02 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:26:36.060 10:52:02 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:26:36.060 10:52:02 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:36.060 10:52:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:36.060 10:52:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:36.060 10:52:02 -- 
common/autotest_common.sh@10 -- # set +x 00:26:36.060 ************************************ 00:26:36.060 START TEST spdk_dd 00:26:36.060 ************************************ 00:26:36.060 10:52:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:36.060 * Looking for test storage... 00:26:36.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:36.060 10:52:02 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:36.060 10:52:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.060 10:52:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.060 10:52:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.060 10:52:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.060 10:52:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.060 10:52:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.060 10:52:02 -- paths/export.sh@5 -- # export PATH 00:26:36.061 10:52:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.061 10:52:02 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:36.319 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:36.319 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:37.698 10:52:04 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:26:37.698 10:52:04 -- dd/dd.sh@11 -- # nvme_in_userspace 00:26:37.698 10:52:04 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:37.698 10:52:04 -- scripts/common.sh@312 -- # local nvmes 00:26:37.698 10:52:04 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:37.698 10:52:04 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:37.698 10:52:04 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:37.698 10:52:04 -- scripts/common.sh@297 -- # local bdf= 00:26:37.698 10:52:04 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:37.698 10:52:04 -- scripts/common.sh@232 -- # local class 00:26:37.698 
10:52:04 -- scripts/common.sh@233 -- # local subclass 00:26:37.698 10:52:04 -- scripts/common.sh@234 -- # local progif 00:26:37.698 10:52:04 -- scripts/common.sh@235 -- # printf %02x 1 00:26:37.698 10:52:04 -- scripts/common.sh@235 -- # class=01 00:26:37.698 10:52:04 -- scripts/common.sh@236 -- # printf %02x 8 00:26:37.698 10:52:04 -- scripts/common.sh@236 -- # subclass=08 00:26:37.698 10:52:04 -- scripts/common.sh@237 -- # printf %02x 2 00:26:37.698 10:52:04 -- scripts/common.sh@237 -- # progif=02 00:26:37.698 10:52:04 -- scripts/common.sh@239 -- # hash lspci 00:26:37.698 10:52:04 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:37.698 10:52:04 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:37.698 10:52:04 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:37.698 10:52:04 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:37.698 10:52:04 -- scripts/common.sh@244 -- # tr -d '"' 00:26:37.698 10:52:04 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:37.698 10:52:04 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:37.698 10:52:04 -- scripts/common.sh@15 -- # local i 00:26:37.698 10:52:04 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:37.698 10:52:04 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:37.698 10:52:04 -- scripts/common.sh@24 -- # return 0 00:26:37.698 10:52:04 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:37.698 10:52:04 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:37.698 10:52:04 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:37.698 10:52:04 -- scripts/common.sh@322 -- # uname -s 00:26:37.698 10:52:04 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:37.698 10:52:04 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:37.698 10:52:04 -- scripts/common.sh@327 -- # (( 1 )) 00:26:37.698 10:52:04 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:26:37.698 10:52:04 -- dd/dd.sh@13 -- # check_liburing 00:26:37.698 10:52:04 -- dd/common.sh@139 -- # local lib so 00:26:37.698 10:52:04 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:26:37.698 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.698 10:52:04 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:26:37.698 10:52:04 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.698 10:52:04 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:26:37.698 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:26:37.699 10:52:04 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:26:37.699 10:52:04 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:37.699 10:52:04 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:26:37.699 10:52:04 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:37.699 10:52:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:37.699 10:52:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:37.699 10:52:04 -- common/autotest_common.sh@10 -- # set +x 00:26:37.699 ************************************ 00:26:37.699 START TEST spdk_dd_basic_rw 00:26:37.699 ************************************ 00:26:37.699 10:52:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:37.699 * Looking for test storage... 
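The check_liburing loop traced above is ldd-style introspection: spdk_dd is invoked with the dynamic loader in trace mode so it only prints its shared-object dependencies, and each entry is compared against liburing.so.*. A condensed equivalent (grep stands in for the per-line [[ ]] match of dd/common.sh):

    liburing_in_use=0
    if LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 2>/dev/null \
           | grep -q 'liburing\.so'; then
        liburing_in_use=1
    fi
    # with liburing_in_use=0 and SPDK_TEST_URING=0, dd.sh@15 is a no-op and the basic_rw
    # suite proceeds against the PCIe device 0000:00:06.0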
00:26:37.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:37.699 10:52:04 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:37.699 10:52:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.699 10:52:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.699 10:52:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.699 10:52:04 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:37.699 10:52:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:37.699 10:52:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:37.699 10:52:04 -- paths/export.sh@5 -- # export PATH 00:26:37.699 10:52:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:37.699 10:52:04 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:26:37.699 10:52:04 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:26:37.699 10:52:04 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:26:37.699 10:52:04 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:26:37.699 10:52:04 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:26:37.699 10:52:04 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:37.699 10:52:04 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:26:37.699 10:52:04 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:37.699 10:52:04 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:37.699 10:52:04 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:26:37.699 10:52:04 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:26:37.699 10:52:04 -- dd/common.sh@126 -- # mapfile -t id 00:26:37.699 10:52:04 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:26:37.960 10:52:04 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 99 Data Units Written: 7 Host Read Commands: 2153 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:26:37.960 10:52:04 -- dd/common.sh@130 -- # lbaf=04 00:26:37.961 10:52:04 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not 
Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset 
Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 99 Data Units Written: 7 Host Read Commands: 2153 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:26:37.961 10:52:04 -- dd/common.sh@132 -- # lbaf=4096 00:26:37.961 10:52:04 -- dd/common.sh@134 -- # echo 4096 00:26:37.961 10:52:04 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:26:37.961 10:52:04 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:37.961 10:52:04 -- dd/basic_rw.sh@96 -- # gen_conf 00:26:37.961 10:52:04 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:26:37.961 10:52:04 -- dd/basic_rw.sh@96 -- # : 00:26:37.961 10:52:04 -- dd/common.sh@31 -- # xtrace_disable 00:26:37.961 10:52:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:37.961 10:52:04 -- common/autotest_common.sh@10 -- # set +x 00:26:37.961 10:52:04 -- common/autotest_common.sh@10 -- # set +x 00:26:37.961 ************************************ 
00:26:37.961 START TEST dd_bs_lt_native_bs 00:26:37.961 ************************************ 00:26:37.961 10:52:04 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:37.961 10:52:04 -- common/autotest_common.sh@640 -- # local es=0 00:26:37.961 10:52:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:37.961 10:52:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.961 10:52:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:37.961 10:52:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.961 10:52:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:37.961 10:52:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.961 10:52:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:37.961 10:52:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:37.961 10:52:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:37.961 10:52:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:37.961 { 00:26:37.961 "subsystems": [ 00:26:37.961 { 00:26:37.961 "subsystem": "bdev", 00:26:37.961 "config": [ 00:26:37.961 { 00:26:37.961 "params": { 00:26:37.961 "trtype": "pcie", 00:26:37.961 "traddr": "0000:00:06.0", 00:26:37.961 "name": "Nvme0" 00:26:37.961 }, 00:26:37.961 "method": "bdev_nvme_attach_controller" 00:26:37.961 }, 00:26:37.961 { 00:26:37.961 "method": "bdev_wait_for_examine" 00:26:37.961 } 00:26:37.961 ] 00:26:37.961 } 00:26:37.961 ] 00:26:37.961 } 00:26:37.961 [2024-07-24 10:52:04.517092] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
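The JSON traced above is what gen_conf feeds to spdk_dd over /dev/fd/61: one bdev subsystem that attaches the emulated QEMU controller at PCIe address 0000:00:06.0 as "Nvme0" and then waits for bdev examination. A minimal stand-alone sketch of the same setup, assuming a local SPDK build and a regular config file in place of the process-substitution fd (the file name and input path here are illustrative, not taken from the log):

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# any spdk_dd invocation in this log can then point at the file instead of /dev/fd/61
./build/bin/spdk_dd --if=input.bin --ob=Nvme0n1 --bs=4096 --json /tmp/nvme0.json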
00:26:37.961 [2024-07-24 10:52:04.517334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144475 ] 00:26:38.221 [2024-07-24 10:52:04.669407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.221 [2024-07-24 10:52:04.753126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.480 [2024-07-24 10:52:04.921606] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:26:38.480 [2024-07-24 10:52:04.921764] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:38.480 [2024-07-24 10:52:05.059723] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:38.739 10:52:05 -- common/autotest_common.sh@643 -- # es=234 00:26:38.739 10:52:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:38.739 ************************************ 00:26:38.739 END TEST dd_bs_lt_native_bs 00:26:38.739 ************************************ 00:26:38.739 10:52:05 -- common/autotest_common.sh@652 -- # es=106 00:26:38.739 10:52:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:38.739 10:52:05 -- common/autotest_common.sh@660 -- # es=1 00:26:38.739 10:52:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:38.739 00:26:38.739 real 0m0.732s 00:26:38.739 user 0m0.485s 00:26:38.739 sys 0m0.211s 00:26:38.739 10:52:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:38.739 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:26:38.739 10:52:05 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:26:38.739 10:52:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:38.739 10:52:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:38.739 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:26:38.739 ************************************ 00:26:38.739 START TEST dd_rw 00:26:38.739 ************************************ 00:26:38.739 10:52:05 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:26:38.739 10:52:05 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:26:38.739 10:52:05 -- dd/basic_rw.sh@12 -- # local count size 00:26:38.739 10:52:05 -- dd/basic_rw.sh@13 -- # local qds bss 00:26:38.739 10:52:05 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:26:38.739 10:52:05 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:38.739 10:52:05 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:38.739 10:52:05 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:38.739 10:52:05 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:38.739 10:52:05 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:38.739 10:52:05 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:38.739 10:52:05 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:38.739 10:52:05 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:38.739 10:52:05 -- dd/basic_rw.sh@23 -- # count=15 00:26:38.739 10:52:05 -- dd/basic_rw.sh@24 -- # count=15 00:26:38.739 10:52:05 -- dd/basic_rw.sh@25 -- # size=61440 00:26:38.739 10:52:05 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:38.739 10:52:05 -- dd/common.sh@98 -- # xtrace_disable 00:26:38.739 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:26:39.309 10:52:05 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
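The error above is the expected outcome: dd/common.sh first parsed the identify dump (matching "Current LBA Format: LBA Format #04" and then "LBA Format #04: Data Size: 4096") to learn that the namespace's native block size is 4096 bytes, and dd_bs_lt_native_bs passes only when spdk_dd refuses a --bs of 2048 against that device. The dd_rw run that starts here then sweeps block sizes derived from the native size at queue depths 1 and 64. A rough sketch of that derivation, mirroring the basic_rw trace above:

native_bs=4096                 # extracted from "LBA Format #04: Data Size: 4096"
qds=(1 64)                     # queue depths exercised per block size
bss=()
for s in {0..2}; do
  bss+=($((native_bs << s)))   # 4096, 8192, 16384
done
echo "block sizes: ${bss[*]}, queue depths: ${qds[*]}"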
00:26:39.309 10:52:05 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:39.309 10:52:05 -- dd/common.sh@31 -- # xtrace_disable 00:26:39.309 10:52:05 -- common/autotest_common.sh@10 -- # set +x 00:26:39.309 { 00:26:39.309 "subsystems": [ 00:26:39.309 { 00:26:39.309 "subsystem": "bdev", 00:26:39.309 "config": [ 00:26:39.309 { 00:26:39.309 "params": { 00:26:39.309 "trtype": "pcie", 00:26:39.309 "traddr": "0000:00:06.0", 00:26:39.309 "name": "Nvme0" 00:26:39.309 }, 00:26:39.309 "method": "bdev_nvme_attach_controller" 00:26:39.309 }, 00:26:39.309 { 00:26:39.309 "method": "bdev_wait_for_examine" 00:26:39.309 } 00:26:39.309 ] 00:26:39.309 } 00:26:39.309 ] 00:26:39.309 } 00:26:39.309 [2024-07-24 10:52:05.888069] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:39.309 [2024-07-24 10:52:05.888374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144514 ] 00:26:39.567 [2024-07-24 10:52:06.046181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.567 [2024-07-24 10:52:06.142784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.133  Copying: 60/60 [kB] (average 29 MBps) 00:26:40.133 00:26:40.133 10:52:06 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:26:40.133 10:52:06 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:40.133 10:52:06 -- dd/common.sh@31 -- # xtrace_disable 00:26:40.133 10:52:06 -- common/autotest_common.sh@10 -- # set +x 00:26:40.133 [2024-07-24 10:52:06.660484] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:40.133 [2024-07-24 10:52:06.660801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144532 ] 00:26:40.133 { 00:26:40.133 "subsystems": [ 00:26:40.133 { 00:26:40.133 "subsystem": "bdev", 00:26:40.133 "config": [ 00:26:40.134 { 00:26:40.134 "params": { 00:26:40.134 "trtype": "pcie", 00:26:40.134 "traddr": "0000:00:06.0", 00:26:40.134 "name": "Nvme0" 00:26:40.134 }, 00:26:40.134 "method": "bdev_nvme_attach_controller" 00:26:40.134 }, 00:26:40.134 { 00:26:40.134 "method": "bdev_wait_for_examine" 00:26:40.134 } 00:26:40.134 ] 00:26:40.134 } 00:26:40.134 ] 00:26:40.134 } 00:26:40.134 [2024-07-24 10:52:06.814334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.392 [2024-07-24 10:52:06.891616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.651  Copying: 60/60 [kB] (average 19 MBps) 00:26:40.651 00:26:40.909 10:52:07 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:40.909 10:52:07 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:40.909 10:52:07 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:40.909 10:52:07 -- dd/common.sh@11 -- # local nvme_ref= 00:26:40.909 10:52:07 -- dd/common.sh@12 -- # local size=61440 00:26:40.909 10:52:07 -- dd/common.sh@14 -- # local bs=1048576 00:26:40.909 10:52:07 -- dd/common.sh@15 -- # local count=1 00:26:40.909 10:52:07 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:40.909 10:52:07 -- dd/common.sh@18 -- # gen_conf 00:26:40.909 10:52:07 -- dd/common.sh@31 -- # xtrace_disable 00:26:40.909 10:52:07 -- common/autotest_common.sh@10 -- # set +x 00:26:40.909 [2024-07-24 10:52:07.401318] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:40.909 [2024-07-24 10:52:07.401579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144553 ] 00:26:40.909 { 00:26:40.909 "subsystems": [ 00:26:40.909 { 00:26:40.909 "subsystem": "bdev", 00:26:40.909 "config": [ 00:26:40.909 { 00:26:40.909 "params": { 00:26:40.909 "trtype": "pcie", 00:26:40.909 "traddr": "0000:00:06.0", 00:26:40.909 "name": "Nvme0" 00:26:40.909 }, 00:26:40.909 "method": "bdev_nvme_attach_controller" 00:26:40.909 }, 00:26:40.909 { 00:26:40.909 "method": "bdev_wait_for_examine" 00:26:40.909 } 00:26:40.909 ] 00:26:40.909 } 00:26:40.909 ] 00:26:40.909 } 00:26:40.909 [2024-07-24 10:52:07.548099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.168 [2024-07-24 10:52:07.628649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.427  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:41.427 00:26:41.427 10:52:08 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:41.427 10:52:08 -- dd/basic_rw.sh@23 -- # count=15 00:26:41.427 10:52:08 -- dd/basic_rw.sh@24 -- # count=15 00:26:41.427 10:52:08 -- dd/basic_rw.sh@25 -- # size=61440 00:26:41.427 10:52:08 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:41.427 10:52:08 -- dd/common.sh@98 -- # xtrace_disable 00:26:41.427 10:52:08 -- common/autotest_common.sh@10 -- # set +x 00:26:41.995 10:52:08 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:26:41.995 10:52:08 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:41.995 10:52:08 -- dd/common.sh@31 -- # xtrace_disable 00:26:41.995 10:52:08 -- common/autotest_common.sh@10 -- # set +x 00:26:42.254 [2024-07-24 10:52:08.717087] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:42.254 [2024-07-24 10:52:08.717434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144573 ] 00:26:42.254 { 00:26:42.254 "subsystems": [ 00:26:42.254 { 00:26:42.254 "subsystem": "bdev", 00:26:42.254 "config": [ 00:26:42.254 { 00:26:42.254 "params": { 00:26:42.254 "trtype": "pcie", 00:26:42.254 "traddr": "0000:00:06.0", 00:26:42.254 "name": "Nvme0" 00:26:42.254 }, 00:26:42.254 "method": "bdev_nvme_attach_controller" 00:26:42.254 }, 00:26:42.254 { 00:26:42.254 "method": "bdev_wait_for_examine" 00:26:42.254 } 00:26:42.254 ] 00:26:42.254 } 00:26:42.254 ] 00:26:42.254 } 00:26:42.254 [2024-07-24 10:52:08.871524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.513 [2024-07-24 10:52:08.945100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.772  Copying: 60/60 [kB] (average 58 MBps) 00:26:42.772 00:26:42.772 10:52:09 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:26:42.772 10:52:09 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:42.772 10:52:09 -- dd/common.sh@31 -- # xtrace_disable 00:26:42.772 10:52:09 -- common/autotest_common.sh@10 -- # set +x 00:26:42.772 [2024-07-24 10:52:09.452079] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:42.772 [2024-07-24 10:52:09.452306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144590 ] 00:26:43.031 { 00:26:43.031 "subsystems": [ 00:26:43.031 { 00:26:43.031 "subsystem": "bdev", 00:26:43.031 "config": [ 00:26:43.031 { 00:26:43.031 "params": { 00:26:43.031 "trtype": "pcie", 00:26:43.031 "traddr": "0000:00:06.0", 00:26:43.031 "name": "Nvme0" 00:26:43.031 }, 00:26:43.031 "method": "bdev_nvme_attach_controller" 00:26:43.031 }, 00:26:43.031 { 00:26:43.031 "method": "bdev_wait_for_examine" 00:26:43.031 } 00:26:43.031 ] 00:26:43.031 } 00:26:43.031 ] 00:26:43.031 } 00:26:43.031 [2024-07-24 10:52:09.597200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.031 [2024-07-24 10:52:09.682100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.548  Copying: 60/60 [kB] (average 58 MBps) 00:26:43.548 00:26:43.548 10:52:10 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:43.548 10:52:10 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:43.548 10:52:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:43.548 10:52:10 -- dd/common.sh@11 -- # local nvme_ref= 00:26:43.548 10:52:10 -- dd/common.sh@12 -- # local size=61440 00:26:43.548 10:52:10 -- dd/common.sh@14 -- # local bs=1048576 00:26:43.548 10:52:10 -- dd/common.sh@15 -- # local count=1 00:26:43.548 10:52:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:43.548 10:52:10 -- dd/common.sh@18 -- # gen_conf 00:26:43.548 10:52:10 -- dd/common.sh@31 -- # xtrace_disable 00:26:43.548 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:26:43.548 { 00:26:43.548 "subsystems": [ 00:26:43.548 { 00:26:43.548 "subsystem": "bdev", 00:26:43.548 "config": [ 00:26:43.548 { 00:26:43.548 "params": { 00:26:43.548 "trtype": "pcie", 00:26:43.548 "traddr": "0000:00:06.0", 00:26:43.548 "name": "Nvme0" 00:26:43.548 }, 00:26:43.548 "method": "bdev_nvme_attach_controller" 00:26:43.548 }, 00:26:43.548 { 00:26:43.548 "method": "bdev_wait_for_examine" 00:26:43.548 } 00:26:43.548 ] 00:26:43.548 } 00:26:43.548 ] 00:26:43.548 } 00:26:43.548 [2024-07-24 10:52:10.204368] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:43.548 [2024-07-24 10:52:10.204605] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144604 ] 00:26:43.807 [2024-07-24 10:52:10.351284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.807 [2024-07-24 10:52:10.427338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.325  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:44.325 00:26:44.325 10:52:10 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:44.325 10:52:10 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:44.325 10:52:10 -- dd/basic_rw.sh@23 -- # count=7 00:26:44.325 10:52:10 -- dd/basic_rw.sh@24 -- # count=7 00:26:44.325 10:52:10 -- dd/basic_rw.sh@25 -- # size=57344 00:26:44.325 10:52:10 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:44.325 10:52:10 -- dd/common.sh@98 -- # xtrace_disable 00:26:44.325 10:52:10 -- common/autotest_common.sh@10 -- # set +x 00:26:44.892 10:52:11 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:26:44.892 10:52:11 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:44.892 10:52:11 -- dd/common.sh@31 -- # xtrace_disable 00:26:44.892 10:52:11 -- common/autotest_common.sh@10 -- # set +x 00:26:44.892 [2024-07-24 10:52:11.486284] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:44.892 [2024-07-24 10:52:11.486556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144631 ] 00:26:44.892 { 00:26:44.892 "subsystems": [ 00:26:44.892 { 00:26:44.892 "subsystem": "bdev", 00:26:44.892 "config": [ 00:26:44.892 { 00:26:44.892 "params": { 00:26:44.892 "trtype": "pcie", 00:26:44.892 "traddr": "0000:00:06.0", 00:26:44.892 "name": "Nvme0" 00:26:44.892 }, 00:26:44.892 "method": "bdev_nvme_attach_controller" 00:26:44.892 }, 00:26:44.892 { 00:26:44.892 "method": "bdev_wait_for_examine" 00:26:44.892 } 00:26:44.892 ] 00:26:44.892 } 00:26:44.892 ] 00:26:44.892 } 00:26:45.157 [2024-07-24 10:52:11.639858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.157 [2024-07-24 10:52:11.723752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.683  Copying: 56/56 [kB] (average 54 MBps) 00:26:45.683 00:26:45.683 10:52:12 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:26:45.683 10:52:12 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:45.683 10:52:12 -- dd/common.sh@31 -- # xtrace_disable 00:26:45.683 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:26:45.683 [2024-07-24 10:52:12.240537] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
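Each (block size, queue depth) pair in this run follows the same cycle: write dd.dump0 into the Nvme0n1 bdev, read it back into dd.dump1, diff the two files, and zero the first megabyte of the bdev before the next pass; the count is scaled so each pass moves 48-60 KiB (15x4096, 7x8192, 3x16384 bytes). A condensed sketch of one cycle, reusing the config file from the earlier note, with paths shortened and the values of the 8 KiB / qd=1 pass:

SPDK_DD=./build/bin/spdk_dd
CONF=/tmp/nvme0.json
bs=8192 qd=1 count=7                                                                # 7 * 8192 = 57344 bytes
$SPDK_DD --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json $CONF                  # write
$SPDK_DD --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json $CONF   # read back
diff -q dd.dump0 dd.dump1                                                           # round-trip verification
$SPDK_DD --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json $CONF            # clear_nvme before the next pass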
00:26:45.683 [2024-07-24 10:52:12.240828] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144643 ] 00:26:45.683 { 00:26:45.683 "subsystems": [ 00:26:45.683 { 00:26:45.683 "subsystem": "bdev", 00:26:45.683 "config": [ 00:26:45.683 { 00:26:45.683 "params": { 00:26:45.683 "trtype": "pcie", 00:26:45.683 "traddr": "0000:00:06.0", 00:26:45.683 "name": "Nvme0" 00:26:45.683 }, 00:26:45.683 "method": "bdev_nvme_attach_controller" 00:26:45.683 }, 00:26:45.683 { 00:26:45.683 "method": "bdev_wait_for_examine" 00:26:45.683 } 00:26:45.683 ] 00:26:45.683 } 00:26:45.683 ] 00:26:45.683 } 00:26:45.941 [2024-07-24 10:52:12.393867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.941 [2024-07-24 10:52:12.491573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.459  Copying: 56/56 [kB] (average 27 MBps) 00:26:46.459 00:26:46.459 10:52:12 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:46.459 10:52:12 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:46.459 10:52:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:46.459 10:52:12 -- dd/common.sh@11 -- # local nvme_ref= 00:26:46.459 10:52:12 -- dd/common.sh@12 -- # local size=57344 00:26:46.459 10:52:12 -- dd/common.sh@14 -- # local bs=1048576 00:26:46.459 10:52:12 -- dd/common.sh@15 -- # local count=1 00:26:46.459 10:52:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:46.459 10:52:12 -- dd/common.sh@18 -- # gen_conf 00:26:46.459 10:52:12 -- dd/common.sh@31 -- # xtrace_disable 00:26:46.459 10:52:12 -- common/autotest_common.sh@10 -- # set +x 00:26:46.459 [2024-07-24 10:52:13.021214] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:46.459 [2024-07-24 10:52:13.021488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144660 ] 00:26:46.459 { 00:26:46.459 "subsystems": [ 00:26:46.459 { 00:26:46.459 "subsystem": "bdev", 00:26:46.459 "config": [ 00:26:46.459 { 00:26:46.459 "params": { 00:26:46.459 "trtype": "pcie", 00:26:46.459 "traddr": "0000:00:06.0", 00:26:46.459 "name": "Nvme0" 00:26:46.459 }, 00:26:46.459 "method": "bdev_nvme_attach_controller" 00:26:46.459 }, 00:26:46.459 { 00:26:46.459 "method": "bdev_wait_for_examine" 00:26:46.459 } 00:26:46.459 ] 00:26:46.459 } 00:26:46.459 ] 00:26:46.459 } 00:26:46.718 [2024-07-24 10:52:13.168552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.718 [2024-07-24 10:52:13.250170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.282  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:47.282 00:26:47.282 10:52:13 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:47.282 10:52:13 -- dd/basic_rw.sh@23 -- # count=7 00:26:47.282 10:52:13 -- dd/basic_rw.sh@24 -- # count=7 00:26:47.282 10:52:13 -- dd/basic_rw.sh@25 -- # size=57344 00:26:47.282 10:52:13 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:47.282 10:52:13 -- dd/common.sh@98 -- # xtrace_disable 00:26:47.282 10:52:13 -- common/autotest_common.sh@10 -- # set +x 00:26:47.849 10:52:14 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:26:47.849 10:52:14 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:47.849 10:52:14 -- dd/common.sh@31 -- # xtrace_disable 00:26:47.849 10:52:14 -- common/autotest_common.sh@10 -- # set +x 00:26:47.849 [2024-07-24 10:52:14.278333] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:47.850 [2024-07-24 10:52:14.278560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144680 ] 00:26:47.850 { 00:26:47.850 "subsystems": [ 00:26:47.850 { 00:26:47.850 "subsystem": "bdev", 00:26:47.850 "config": [ 00:26:47.850 { 00:26:47.850 "params": { 00:26:47.850 "trtype": "pcie", 00:26:47.850 "traddr": "0000:00:06.0", 00:26:47.850 "name": "Nvme0" 00:26:47.850 }, 00:26:47.850 "method": "bdev_nvme_attach_controller" 00:26:47.850 }, 00:26:47.850 { 00:26:47.850 "method": "bdev_wait_for_examine" 00:26:47.850 } 00:26:47.850 ] 00:26:47.850 } 00:26:47.850 ] 00:26:47.850 } 00:26:47.850 [2024-07-24 10:52:14.426226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.850 [2024-07-24 10:52:14.486042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.366  Copying: 56/56 [kB] (average 54 MBps) 00:26:48.366 00:26:48.366 10:52:14 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:26:48.366 10:52:14 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:48.366 10:52:14 -- dd/common.sh@31 -- # xtrace_disable 00:26:48.366 10:52:14 -- common/autotest_common.sh@10 -- # set +x 00:26:48.366 [2024-07-24 10:52:14.972237] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:48.366 [2024-07-24 10:52:14.972418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144700 ] 00:26:48.366 { 00:26:48.366 "subsystems": [ 00:26:48.366 { 00:26:48.366 "subsystem": "bdev", 00:26:48.366 "config": [ 00:26:48.366 { 00:26:48.366 "params": { 00:26:48.366 "trtype": "pcie", 00:26:48.366 "traddr": "0000:00:06.0", 00:26:48.366 "name": "Nvme0" 00:26:48.366 }, 00:26:48.366 "method": "bdev_nvme_attach_controller" 00:26:48.366 }, 00:26:48.366 { 00:26:48.366 "method": "bdev_wait_for_examine" 00:26:48.366 } 00:26:48.366 ] 00:26:48.366 } 00:26:48.366 ] 00:26:48.366 } 00:26:48.624 [2024-07-24 10:52:15.112624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.624 [2024-07-24 10:52:15.180656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.141  Copying: 56/56 [kB] (average 54 MBps) 00:26:49.141 00:26:49.141 10:52:15 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:49.141 10:52:15 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:49.141 10:52:15 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:49.141 10:52:15 -- dd/common.sh@11 -- # local nvme_ref= 00:26:49.141 10:52:15 -- dd/common.sh@12 -- # local size=57344 00:26:49.141 10:52:15 -- dd/common.sh@14 -- # local bs=1048576 00:26:49.141 10:52:15 -- dd/common.sh@15 -- # local count=1 00:26:49.141 10:52:15 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:49.141 10:52:15 -- dd/common.sh@18 -- # gen_conf 00:26:49.141 10:52:15 -- dd/common.sh@31 -- # xtrace_disable 00:26:49.141 10:52:15 -- common/autotest_common.sh@10 -- # set +x 00:26:49.141 [2024-07-24 10:52:15.683846] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:49.141 [2024-07-24 10:52:15.684114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144716 ] 00:26:49.141 { 00:26:49.141 "subsystems": [ 00:26:49.141 { 00:26:49.141 "subsystem": "bdev", 00:26:49.141 "config": [ 00:26:49.141 { 00:26:49.141 "params": { 00:26:49.141 "trtype": "pcie", 00:26:49.141 "traddr": "0000:00:06.0", 00:26:49.141 "name": "Nvme0" 00:26:49.141 }, 00:26:49.141 "method": "bdev_nvme_attach_controller" 00:26:49.141 }, 00:26:49.141 { 00:26:49.141 "method": "bdev_wait_for_examine" 00:26:49.141 } 00:26:49.141 ] 00:26:49.141 } 00:26:49.141 ] 00:26:49.141 } 00:26:49.399 [2024-07-24 10:52:15.833610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.399 [2024-07-24 10:52:15.914050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.658  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:49.658 00:26:49.917 10:52:16 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:49.917 10:52:16 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:49.917 10:52:16 -- dd/basic_rw.sh@23 -- # count=3 00:26:49.917 10:52:16 -- dd/basic_rw.sh@24 -- # count=3 00:26:49.917 10:52:16 -- dd/basic_rw.sh@25 -- # size=49152 00:26:49.917 10:52:16 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:49.917 10:52:16 -- dd/common.sh@98 -- # xtrace_disable 00:26:49.917 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:26:50.187 10:52:16 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:26:50.187 10:52:16 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:50.187 10:52:16 -- dd/common.sh@31 -- # xtrace_disable 00:26:50.187 10:52:16 -- common/autotest_common.sh@10 -- # set +x 00:26:50.188 [2024-07-24 10:52:16.814449] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:50.188 [2024-07-24 10:52:16.814690] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144736 ] 00:26:50.188 { 00:26:50.188 "subsystems": [ 00:26:50.188 { 00:26:50.188 "subsystem": "bdev", 00:26:50.188 "config": [ 00:26:50.188 { 00:26:50.188 "params": { 00:26:50.188 "trtype": "pcie", 00:26:50.188 "traddr": "0000:00:06.0", 00:26:50.188 "name": "Nvme0" 00:26:50.188 }, 00:26:50.188 "method": "bdev_nvme_attach_controller" 00:26:50.188 }, 00:26:50.188 { 00:26:50.188 "method": "bdev_wait_for_examine" 00:26:50.188 } 00:26:50.188 ] 00:26:50.188 } 00:26:50.188 ] 00:26:50.188 } 00:26:50.460 [2024-07-24 10:52:16.962137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.460 [2024-07-24 10:52:17.036893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.979  Copying: 48/48 [kB] (average 46 MBps) 00:26:50.979 00:26:50.979 10:52:17 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:26:50.979 10:52:17 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:50.979 10:52:17 -- dd/common.sh@31 -- # xtrace_disable 00:26:50.979 10:52:17 -- common/autotest_common.sh@10 -- # set +x 00:26:50.979 [2024-07-24 10:52:17.571916] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:50.979 [2024-07-24 10:52:17.572134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144756 ] 00:26:50.979 { 00:26:50.979 "subsystems": [ 00:26:50.979 { 00:26:50.979 "subsystem": "bdev", 00:26:50.979 "config": [ 00:26:50.979 { 00:26:50.979 "params": { 00:26:50.979 "trtype": "pcie", 00:26:50.979 "traddr": "0000:00:06.0", 00:26:50.979 "name": "Nvme0" 00:26:50.979 }, 00:26:50.979 "method": "bdev_nvme_attach_controller" 00:26:50.979 }, 00:26:50.979 { 00:26:50.979 "method": "bdev_wait_for_examine" 00:26:50.979 } 00:26:50.979 ] 00:26:50.979 } 00:26:50.979 ] 00:26:50.979 } 00:26:51.238 [2024-07-24 10:52:17.714928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.238 [2024-07-24 10:52:17.800257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.755  Copying: 48/48 [kB] (average 46 MBps) 00:26:51.755 00:26:51.755 10:52:18 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:51.755 10:52:18 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:51.755 10:52:18 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:51.755 10:52:18 -- dd/common.sh@11 -- # local nvme_ref= 00:26:51.755 10:52:18 -- dd/common.sh@12 -- # local size=49152 00:26:51.755 10:52:18 -- dd/common.sh@14 -- # local bs=1048576 00:26:51.755 10:52:18 -- dd/common.sh@15 -- # local count=1 00:26:51.755 10:52:18 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:51.755 10:52:18 -- dd/common.sh@18 -- # gen_conf 00:26:51.755 10:52:18 -- dd/common.sh@31 -- # xtrace_disable 00:26:51.755 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:26:51.755 [2024-07-24 10:52:18.314738] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 22.11.4 initialization... 00:26:51.755 [2024-07-24 10:52:18.315943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144765 ] 00:26:51.755 { 00:26:51.755 "subsystems": [ 00:26:51.755 { 00:26:51.755 "subsystem": "bdev", 00:26:51.755 "config": [ 00:26:51.755 { 00:26:51.755 "params": { 00:26:51.755 "trtype": "pcie", 00:26:51.755 "traddr": "0000:00:06.0", 00:26:51.755 "name": "Nvme0" 00:26:51.755 }, 00:26:51.755 "method": "bdev_nvme_attach_controller" 00:26:51.755 }, 00:26:51.755 { 00:26:51.755 "method": "bdev_wait_for_examine" 00:26:51.755 } 00:26:51.755 ] 00:26:51.755 } 00:26:51.755 ] 00:26:51.755 } 00:26:52.014 [2024-07-24 10:52:18.468002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.014 [2024-07-24 10:52:18.542742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.585  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:52.585 00:26:52.585 10:52:18 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:52.585 10:52:18 -- dd/basic_rw.sh@23 -- # count=3 00:26:52.585 10:52:18 -- dd/basic_rw.sh@24 -- # count=3 00:26:52.585 10:52:18 -- dd/basic_rw.sh@25 -- # size=49152 00:26:52.585 10:52:18 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:52.585 10:52:18 -- dd/common.sh@98 -- # xtrace_disable 00:26:52.585 10:52:18 -- common/autotest_common.sh@10 -- # set +x 00:26:52.844 10:52:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:26:52.844 10:52:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:52.844 10:52:19 -- dd/common.sh@31 -- # xtrace_disable 00:26:52.844 10:52:19 -- common/autotest_common.sh@10 -- # set +x 00:26:52.844 [2024-07-24 10:52:19.478464] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:52.844 [2024-07-24 10:52:19.479360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144794 ] 00:26:52.844 { 00:26:52.844 "subsystems": [ 00:26:52.844 { 00:26:52.844 "subsystem": "bdev", 00:26:52.844 "config": [ 00:26:52.844 { 00:26:52.844 "params": { 00:26:52.844 "trtype": "pcie", 00:26:52.844 "traddr": "0000:00:06.0", 00:26:52.844 "name": "Nvme0" 00:26:52.844 }, 00:26:52.844 "method": "bdev_nvme_attach_controller" 00:26:52.844 }, 00:26:52.844 { 00:26:52.844 "method": "bdev_wait_for_examine" 00:26:52.844 } 00:26:52.844 ] 00:26:52.844 } 00:26:52.844 ] 00:26:52.844 } 00:26:53.103 [2024-07-24 10:52:19.623480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.103 [2024-07-24 10:52:19.711673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.620  Copying: 48/48 [kB] (average 46 MBps) 00:26:53.620 00:26:53.620 10:52:20 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:26:53.620 10:52:20 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:53.620 10:52:20 -- dd/common.sh@31 -- # xtrace_disable 00:26:53.620 10:52:20 -- common/autotest_common.sh@10 -- # set +x 00:26:53.620 [2024-07-24 10:52:20.245226] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:53.620 [2024-07-24 10:52:20.245503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144807 ] 00:26:53.620 { 00:26:53.620 "subsystems": [ 00:26:53.620 { 00:26:53.620 "subsystem": "bdev", 00:26:53.620 "config": [ 00:26:53.620 { 00:26:53.620 "params": { 00:26:53.620 "trtype": "pcie", 00:26:53.620 "traddr": "0000:00:06.0", 00:26:53.620 "name": "Nvme0" 00:26:53.620 }, 00:26:53.620 "method": "bdev_nvme_attach_controller" 00:26:53.620 }, 00:26:53.620 { 00:26:53.620 "method": "bdev_wait_for_examine" 00:26:53.620 } 00:26:53.620 ] 00:26:53.620 } 00:26:53.620 ] 00:26:53.620 } 00:26:53.879 [2024-07-24 10:52:20.396512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.879 [2024-07-24 10:52:20.497741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.399  Copying: 48/48 [kB] (average 46 MBps) 00:26:54.399 00:26:54.399 10:52:20 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:54.399 10:52:20 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:54.399 10:52:20 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:54.399 10:52:20 -- dd/common.sh@11 -- # local nvme_ref= 00:26:54.399 10:52:20 -- dd/common.sh@12 -- # local size=49152 00:26:54.399 10:52:20 -- dd/common.sh@14 -- # local bs=1048576 00:26:54.399 10:52:20 -- dd/common.sh@15 -- # local count=1 00:26:54.399 10:52:20 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:54.399 10:52:20 -- dd/common.sh@18 -- # gen_conf 00:26:54.399 10:52:20 -- dd/common.sh@31 -- # xtrace_disable 00:26:54.399 10:52:20 -- common/autotest_common.sh@10 -- # set +x 00:26:54.399 [2024-07-24 10:52:21.030191] Starting SPDK v24.01.1-pre git sha1 
dbef7efac / DPDK 22.11.4 initialization... 00:26:54.399 { 00:26:54.399 "subsystems": [ 00:26:54.399 { 00:26:54.399 "subsystem": "bdev", 00:26:54.399 "config": [ 00:26:54.399 { 00:26:54.399 "params": { 00:26:54.399 "trtype": "pcie", 00:26:54.399 "traddr": "0000:00:06.0", 00:26:54.399 "name": "Nvme0" 00:26:54.399 }, 00:26:54.399 "method": "bdev_nvme_attach_controller" 00:26:54.400 }, 00:26:54.400 { 00:26:54.400 "method": "bdev_wait_for_examine" 00:26:54.400 } 00:26:54.400 ] 00:26:54.400 } 00:26:54.400 ] 00:26:54.400 } 00:26:54.400 [2024-07-24 10:52:21.031100] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144827 ] 00:26:54.658 [2024-07-24 10:52:21.180134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.658 [2024-07-24 10:52:21.279687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.176  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:55.176 00:26:55.176 00:26:55.176 real 0m16.514s 00:26:55.176 user 0m11.229s 00:26:55.176 sys 0m3.883s 00:26:55.176 10:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.176 ************************************ 00:26:55.176 END TEST dd_rw 00:26:55.176 10:52:21 -- common/autotest_common.sh@10 -- # set +x 00:26:55.176 ************************************ 00:26:55.177 10:52:21 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:26:55.177 10:52:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:55.177 10:52:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:55.177 10:52:21 -- common/autotest_common.sh@10 -- # set +x 00:26:55.177 ************************************ 00:26:55.177 START TEST dd_rw_offset 00:26:55.177 ************************************ 00:26:55.177 10:52:21 -- common/autotest_common.sh@1104 -- # basic_offset 00:26:55.177 10:52:21 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:26:55.177 10:52:21 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:26:55.177 10:52:21 -- dd/common.sh@98 -- # xtrace_disable 00:26:55.177 10:52:21 -- common/autotest_common.sh@10 -- # set +x 00:26:55.177 10:52:21 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:26:55.177 10:52:21 -- dd/basic_rw.sh@56 -- # 
data=m1n340ljon3n66l9zm8ki89zlm02m7h58y5aqmtcx3gbuhbd2xm177z4hlo0uh5zcm9dyda8rgxdg3n0n8nkgjcyqd9zdqscr9f339mh620vy7duqmpegivglcllciyxv5egheect3qevjawfdgy9gygphcskoxei6v5j5e3r9jyq10cissyo7zmvfs8sw28etcy6aau0fsy4bisftqkknf9nipe01ejobddl0wtaptdxk4oas5mmq7h19mlcenkjqj9ca0zw7qb2halgw3duylen2yr4utt1k0x96ksl1qabtcreul4e7megvq8ob7ka8vzc4kut7ne6f1ocubb77oxkwshvs9p8gjg0vuafy6s3m2ka80aikyt7nz37zkatjk9lexp4dt4e0oga8cuzwo8ly8g4qnv2890l1pyrrb4aji9uk224ttag7js9r89hczyaeezxzej8emvta2r0jbo50jc898f3ehkijynm0q5jmvkfjjjpj4mzdqq5vjv5zcaop1qn5r61zh3orjlxijbti70w9caekfvgqrqgpgo0rjcajt8ru8avemut3q8c1vcdbhp1clp38axsm4oiu8zecttmqh65vqzb6yz1klb3axf5ewkdmokn44cs7ruxwgs9mv6lpc42qeox5x84qsifyseb0mv48qpjz4hca5fu1pw96q2qd7l0iylf5rpnn4dtbt1cx3exfvkbg4vwcrol956mzu1wmwq0m2ztukeh5sjrkik01wyctmnrb59udq5bah0jck0wkm420aa3ut0rtl1sb1etev2sap9f96dl0ud5di6rt6ypvb6n9mfoy1n25m54hh8az50qzgg95ojsca5ed29v9rz0wkmnqriuxm0j24t26ghbe9uom1usyi51bg2lhb8otmubbyh5a98il3yvrds976qzhank82llgxzcp8cpwd4uk30ewoot35ddxpds2borl8qb5cyeavzmgjng2kf1gcjq67guwteodng238l89njx23o3vdjoqabqx8wbb05sd52sgkk7ws0kp9oqxt7q9st8a32s3tslsl733dffq2vrpmt9d63ngo3ivxg51yscllh8uejl87azlakguogcdr7hu74a61hncs0ypo4sfjrtmtshvxxznk2dnfv1qvs4db2s3xcoj82qzdi1j6eppub78mq5hjs8tmqnjvb9u4dwfs57fbk47bvxgnz99onkfced9cv12qju98o11oy23cagxvfrun9bg05kei5mrc9ly3eg7jsgdqebba97hdi3rbtf6cg5ksvh57fi0w641yqf30v5exlhh9bt9ml4dge8n4ztasuezzmqyh97cqlvxr2w1qr2h2adil087q92ambgg9mwxu1ctejvh6pg1x2se3tgk7rq1egv1r20e8fgrw7xe2fbr5jmx500hmycqfqi5skjop25t87zai4vu17svgsabuaemdn5sp1hkxerbyafkn1kq5c6yxb1kq2hfpwoptgybzopxyv9cqbgiu680bay5vpg9z9enssiyi07g7jj2bdiwxzuafbuziha71bi2oayfwotokpm25r53vj1omdbn876ab8iz0yr99pkxunbvftvhgp9o2bf8i41h1jnm8u3370k04huos1dyfcgp327v4rh45bgdwh6va3ob98g131h6pc2eyp3ymlobr6cwm2onsqb2u2aq7uuvwmb1h6dcaa9z4zzyz9wso51v51sonjqcfsfpsa9x981sq84i5qh1ef5yk2c9629h1mfzgzzw2jajoa4r2biuy5xkytuomesqj5ot1cjfbprwblbqfvtvon50onlijdygm7evmv76ghubpv0wt4125a2lmdz6ezrr6oizwdycyw4hl1eh587ezh7rgp981t9f5fhznqmi0bd6hqt6l57todaubxtim1ba7aia60p6ddx8vspcr1d9a53rs8wdzlpietboqtmes5msn4m5y0a6zxy5pf57c86e606ygkisg4vj0h89wvuld3i44to9qbiqamljqbo5xupgtoq1ekj69tgi6xizhmchw2kgskme3s9miuy6w1c6gdiey3oqo7baup7aijthrjjtmgfo2xldwmmcfrh82e4kkkgir844o7dsvvgf86znuhfxnlfn5cm3g9cnugxhavcwjpgickrjapvr6xdgxciba8bxnftk7eo5y3ph1o3ohczsig4wga21tsyn9ix239o71w0ter42y8mepfd0ksvflastlsfyoku6lgj1k5irni8hjt12eiowmwwmpu5mf8sp5vn0ddm8y1425hlqr2cykhdru78d9xvqqupm2wsvoutcquv0erdt1xzvvvfjtqqh9v23s34tlezk83liaarw1my60w1qg1nd241p9eystr2vchfvahdnpm5q3cuhwvj6ni6cd15vswwyfb73ff8p02bd63872pk3n2lpbwbv4bnvr0n3l24609x0uamcyezl98sv7k75z7q6aogfv6fg89yisgo1aptmj001jn44crudu3ipvg5z62jnrp9zdcum62qkvguycg2xlppabfmuj8q0da2r3p5dx31og9f81e44ssmliyg4qqhobzkpt4m85uv5a3jnyo16d7437cmi5ejot8m39fpdqya3ne5pxvsylfod93atk40bp6nxa9cqz8vix7fbp8idby86bd3ka57nr999pjra47okw50gshm7cw0082xr4hj4tj1ontbc3dbscb4kkmlt3cvotv0lo4ae2gf9mj9jffaa81rqljpo5327vxjsxj6ylylgnu5tgopveqt9lb1i83hrutiejksgxrpwy5nudxz4aitc12um0voaahyo10o4jhrb279og79szka0puf036l6eu100i3hg5q393iuifxmqu8wm096ou7avikvcc17xnrruunz8ujf7latimwy92lsstum5nyrowegza1031hiaxumamqjc3o06uu31nkhr5gwsjb811tc85m1lgnyvrg6q90icx6nbslolalldr8wzhz01d3w0y5id3sd6vys08c9rxnhwouwy3rmlirm7s8d5bh7iaq18sfm0kv0ocwlyznkaf7grzfvjtvx233rm5gb99k4kqjvkgjdparqraxfk6zzel07m0618ihxih0zmlx6iw077phgzlyqrmg9zw3h7dqdqmv6j2yfpgtj63ukd3hxarhrp7yx4golg54ytbemm428n5bvwkt7y5ltd54zq75s0h3jqrszbfk455btj8pzqqkapyokz3pas28xuyl1vxgw5ykx8jf33911z49togvh0grpw24qyfjlnprvgabd70j95zmcxzv4ymnhhxufpnus05ipr0qtqgbl9nhfocm41nme5f8egf59x2v301lu8cf6a42v9ea4n62u4se8cssv0c35bnieodsbzdxy63km9xxb2t48e85e0t8ieddb6p5jzzwghanlbxi8g17sq9biej7rcg31paxzf8jbzvkucucml6q7jgvwu37szhiis5yd2vrx
hx00vd7nta3pxglnzreltyn02zgnzni1foh4kp33hghjbetyirumw0kazkcarxufzyazc17gttlcizp7qywqmyepzxxu04xlu52x2rhcwvajh54etbf4bf7ugyiy7roitr5ej6tfv3pvmdpt9v1g9fcvb0flg6teohg4dgn6l3nz4xt1fis7udn5c19qv4lcmdaqv0891juv6wogyfkpxzxt5e92fxn3enfgchyqmw4zk5zi0mvg28berv33og30phvwgzf9voi2ajkxkjkbgycf0j62ed2zxvd584ktyieez0yraed4fizkyyhpxxe8856japdvhesg4ckmoy2p1hhcthl0at12xahzsqavos2xvzoipmls6p2142b2f8ecbbzrbswk0kdkbnglgztx2oa94ct29yioy2m0it3bwjdpzp04gvefx765gq80pe9t335zlq7kpm2gm5jnrp9xqt0xj0gp8sv4dkodc71mnuls8yu25t0x6rx8wa7ryrm96smf6zyqdouvoxe7c75ykhhwi7j24yvb6w 00:26:55.177 10:52:21 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:26:55.177 10:52:21 -- dd/basic_rw.sh@59 -- # gen_conf 00:26:55.177 10:52:21 -- dd/common.sh@31 -- # xtrace_disable 00:26:55.177 10:52:21 -- common/autotest_common.sh@10 -- # set +x 00:26:55.436 [2024-07-24 10:52:21.900584] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:55.436 [2024-07-24 10:52:21.901820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144863 ] 00:26:55.436 { 00:26:55.436 "subsystems": [ 00:26:55.436 { 00:26:55.436 "subsystem": "bdev", 00:26:55.436 "config": [ 00:26:55.436 { 00:26:55.436 "params": { 00:26:55.436 "trtype": "pcie", 00:26:55.436 "traddr": "0000:00:06.0", 00:26:55.436 "name": "Nvme0" 00:26:55.436 }, 00:26:55.436 "method": "bdev_nvme_attach_controller" 00:26:55.436 }, 00:26:55.436 { 00:26:55.436 "method": "bdev_wait_for_examine" 00:26:55.436 } 00:26:55.436 ] 00:26:55.436 } 00:26:55.436 ] 00:26:55.436 } 00:26:55.436 [2024-07-24 10:52:22.065374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.695 [2024-07-24 10:52:22.158029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.954  Copying: 4096/4096 [B] (average 4000 kBps) 00:26:55.954 00:26:55.954 10:52:22 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:26:55.954 10:52:22 -- dd/basic_rw.sh@65 -- # gen_conf 00:26:55.954 10:52:22 -- dd/common.sh@31 -- # xtrace_disable 00:26:55.954 10:52:22 -- common/autotest_common.sh@10 -- # set +x 00:26:56.213 [2024-07-24 10:52:22.685103] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
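dd_rw_offset exercises offset I/O: gen_bytes 4096 produced the 4 KiB of random printable data shown above, spdk_dd wrote it one unit past the start of the bdev with --seek=1, and the read-back below uses --skip=1 --count=1 before the harness compares the payload byte-for-byte (the long [[ ... == ... ]] match). A simplified sketch of the same round trip, with /dev/urandom and cmp as stand-ins for the harness's gen_bytes and string comparison:

SPDK_DD=./build/bin/spdk_dd
CONF=/tmp/nvme0.json
head -c 4096 /dev/urandom > dd.dump0                                  # stand-in for gen_bytes 4096
$SPDK_DD --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json $CONF             # write one unit past the start
$SPDK_DD --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json $CONF   # read the same region back
cmp dd.dump0 dd.dump1                                                 # payload must match exactly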
00:26:56.213 [2024-07-24 10:52:22.685350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144887 ] 00:26:56.213 { 00:26:56.213 "subsystems": [ 00:26:56.213 { 00:26:56.213 "subsystem": "bdev", 00:26:56.213 "config": [ 00:26:56.213 { 00:26:56.213 "params": { 00:26:56.213 "trtype": "pcie", 00:26:56.213 "traddr": "0000:00:06.0", 00:26:56.213 "name": "Nvme0" 00:26:56.213 }, 00:26:56.213 "method": "bdev_nvme_attach_controller" 00:26:56.213 }, 00:26:56.213 { 00:26:56.213 "method": "bdev_wait_for_examine" 00:26:56.213 } 00:26:56.213 ] 00:26:56.213 } 00:26:56.213 ] 00:26:56.213 } 00:26:56.213 [2024-07-24 10:52:22.832975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.471 [2024-07-24 10:52:22.924285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.730  Copying: 4096/4096 [B] (average 4000 kBps) 00:26:56.730 00:26:56.730 10:52:23 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:26:56.731 10:52:23 -- dd/basic_rw.sh@72 -- # [[ m1n340ljon3n66l9zm8ki89zlm02m7h58y5aqmtcx3gbuhbd2xm177z4hlo0uh5zcm9dyda8rgxdg3n0n8nkgjcyqd9zdqscr9f339mh620vy7duqmpegivglcllciyxv5egheect3qevjawfdgy9gygphcskoxei6v5j5e3r9jyq10cissyo7zmvfs8sw28etcy6aau0fsy4bisftqkknf9nipe01ejobddl0wtaptdxk4oas5mmq7h19mlcenkjqj9ca0zw7qb2halgw3duylen2yr4utt1k0x96ksl1qabtcreul4e7megvq8ob7ka8vzc4kut7ne6f1ocubb77oxkwshvs9p8gjg0vuafy6s3m2ka80aikyt7nz37zkatjk9lexp4dt4e0oga8cuzwo8ly8g4qnv2890l1pyrrb4aji9uk224ttag7js9r89hczyaeezxzej8emvta2r0jbo50jc898f3ehkijynm0q5jmvkfjjjpj4mzdqq5vjv5zcaop1qn5r61zh3orjlxijbti70w9caekfvgqrqgpgo0rjcajt8ru8avemut3q8c1vcdbhp1clp38axsm4oiu8zecttmqh65vqzb6yz1klb3axf5ewkdmokn44cs7ruxwgs9mv6lpc42qeox5x84qsifyseb0mv48qpjz4hca5fu1pw96q2qd7l0iylf5rpnn4dtbt1cx3exfvkbg4vwcrol956mzu1wmwq0m2ztukeh5sjrkik01wyctmnrb59udq5bah0jck0wkm420aa3ut0rtl1sb1etev2sap9f96dl0ud5di6rt6ypvb6n9mfoy1n25m54hh8az50qzgg95ojsca5ed29v9rz0wkmnqriuxm0j24t26ghbe9uom1usyi51bg2lhb8otmubbyh5a98il3yvrds976qzhank82llgxzcp8cpwd4uk30ewoot35ddxpds2borl8qb5cyeavzmgjng2kf1gcjq67guwteodng238l89njx23o3vdjoqabqx8wbb05sd52sgkk7ws0kp9oqxt7q9st8a32s3tslsl733dffq2vrpmt9d63ngo3ivxg51yscllh8uejl87azlakguogcdr7hu74a61hncs0ypo4sfjrtmtshvxxznk2dnfv1qvs4db2s3xcoj82qzdi1j6eppub78mq5hjs8tmqnjvb9u4dwfs57fbk47bvxgnz99onkfced9cv12qju98o11oy23cagxvfrun9bg05kei5mrc9ly3eg7jsgdqebba97hdi3rbtf6cg5ksvh57fi0w641yqf30v5exlhh9bt9ml4dge8n4ztasuezzmqyh97cqlvxr2w1qr2h2adil087q92ambgg9mwxu1ctejvh6pg1x2se3tgk7rq1egv1r20e8fgrw7xe2fbr5jmx500hmycqfqi5skjop25t87zai4vu17svgsabuaemdn5sp1hkxerbyafkn1kq5c6yxb1kq2hfpwoptgybzopxyv9cqbgiu680bay5vpg9z9enssiyi07g7jj2bdiwxzuafbuziha71bi2oayfwotokpm25r53vj1omdbn876ab8iz0yr99pkxunbvftvhgp9o2bf8i41h1jnm8u3370k04huos1dyfcgp327v4rh45bgdwh6va3ob98g131h6pc2eyp3ymlobr6cwm2onsqb2u2aq7uuvwmb1h6dcaa9z4zzyz9wso51v51sonjqcfsfpsa9x981sq84i5qh1ef5yk2c9629h1mfzgzzw2jajoa4r2biuy5xkytuomesqj5ot1cjfbprwblbqfvtvon50onlijdygm7evmv76ghubpv0wt4125a2lmdz6ezrr6oizwdycyw4hl1eh587ezh7rgp981t9f5fhznqmi0bd6hqt6l57todaubxtim1ba7aia60p6ddx8vspcr1d9a53rs8wdzlpietboqtmes5msn4m5y0a6zxy5pf57c86e606ygkisg4vj0h89wvuld3i44to9qbiqamljqbo5xupgtoq1ekj69tgi6xizhmchw2kgskme3s9miuy6w1c6gdiey3oqo7baup7aijthrjjtmgfo2xldwmmcfrh82e4kkkgir844o7dsvvgf86znuhfxnlfn5cm3g9cnugxhavcwjpgickrjapvr6xdgxciba8bxnftk7eo5y3ph1o3ohczsig4wga21tsyn9ix239o71w0ter42y8mepfd0ksvflastlsfyoku6lgj1k5irni8hjt12eiowmwwmpu5mf8sp5vn0ddm8y1425hlqr2cykhdru78d9xvqqupm2wsvoutcquv0erdt1xzvvvfjtqqh9v23s34tlezk83l
iaarw1my60w1qg1nd241p9eystr2vchfvahdnpm5q3cuhwvj6ni6cd15vswwyfb73ff8p02bd63872pk3n2lpbwbv4bnvr0n3l24609x0uamcyezl98sv7k75z7q6aogfv6fg89yisgo1aptmj001jn44crudu3ipvg5z62jnrp9zdcum62qkvguycg2xlppabfmuj8q0da2r3p5dx31og9f81e44ssmliyg4qqhobzkpt4m85uv5a3jnyo16d7437cmi5ejot8m39fpdqya3ne5pxvsylfod93atk40bp6nxa9cqz8vix7fbp8idby86bd3ka57nr999pjra47okw50gshm7cw0082xr4hj4tj1ontbc3dbscb4kkmlt3cvotv0lo4ae2gf9mj9jffaa81rqljpo5327vxjsxj6ylylgnu5tgopveqt9lb1i83hrutiejksgxrpwy5nudxz4aitc12um0voaahyo10o4jhrb279og79szka0puf036l6eu100i3hg5q393iuifxmqu8wm096ou7avikvcc17xnrruunz8ujf7latimwy92lsstum5nyrowegza1031hiaxumamqjc3o06uu31nkhr5gwsjb811tc85m1lgnyvrg6q90icx6nbslolalldr8wzhz01d3w0y5id3sd6vys08c9rxnhwouwy3rmlirm7s8d5bh7iaq18sfm0kv0ocwlyznkaf7grzfvjtvx233rm5gb99k4kqjvkgjdparqraxfk6zzel07m0618ihxih0zmlx6iw077phgzlyqrmg9zw3h7dqdqmv6j2yfpgtj63ukd3hxarhrp7yx4golg54ytbemm428n5bvwkt7y5ltd54zq75s0h3jqrszbfk455btj8pzqqkapyokz3pas28xuyl1vxgw5ykx8jf33911z49togvh0grpw24qyfjlnprvgabd70j95zmcxzv4ymnhhxufpnus05ipr0qtqgbl9nhfocm41nme5f8egf59x2v301lu8cf6a42v9ea4n62u4se8cssv0c35bnieodsbzdxy63km9xxb2t48e85e0t8ieddb6p5jzzwghanlbxi8g17sq9biej7rcg31paxzf8jbzvkucucml6q7jgvwu37szhiis5yd2vrxhx00vd7nta3pxglnzreltyn02zgnzni1foh4kp33hghjbetyirumw0kazkcarxufzyazc17gttlcizp7qywqmyepzxxu04xlu52x2rhcwvajh54etbf4bf7ugyiy7roitr5ej6tfv3pvmdpt9v1g9fcvb0flg6teohg4dgn6l3nz4xt1fis7udn5c19qv4lcmdaqv0891juv6wogyfkpxzxt5e92fxn3enfgchyqmw4zk5zi0mvg28berv33og30phvwgzf9voi2ajkxkjkbgycf0j62ed2zxvd584ktyieez0yraed4fizkyyhpxxe8856japdvhesg4ckmoy2p1hhcthl0at12xahzsqavos2xvzoipmls6p2142b2f8ecbbzrbswk0kdkbnglgztx2oa94ct29yioy2m0it3bwjdpzp04gvefx765gq80pe9t335zlq7kpm2gm5jnrp9xqt0xj0gp8sv4dkodc71mnuls8yu25t0x6rx8wa7ryrm96smf6zyqdouvoxe7c75ykhhwi7j24yvb6w == \m\1\n\3\4\0\l\j\o\n\3\n\6\6\l\9\z\m\8\k\i\8\9\z\l\m\0\2\m\7\h\5\8\y\5\a\q\m\t\c\x\3\g\b\u\h\b\d\2\x\m\1\7\7\z\4\h\l\o\0\u\h\5\z\c\m\9\d\y\d\a\8\r\g\x\d\g\3\n\0\n\8\n\k\g\j\c\y\q\d\9\z\d\q\s\c\r\9\f\3\3\9\m\h\6\2\0\v\y\7\d\u\q\m\p\e\g\i\v\g\l\c\l\l\c\i\y\x\v\5\e\g\h\e\e\c\t\3\q\e\v\j\a\w\f\d\g\y\9\g\y\g\p\h\c\s\k\o\x\e\i\6\v\5\j\5\e\3\r\9\j\y\q\1\0\c\i\s\s\y\o\7\z\m\v\f\s\8\s\w\2\8\e\t\c\y\6\a\a\u\0\f\s\y\4\b\i\s\f\t\q\k\k\n\f\9\n\i\p\e\0\1\e\j\o\b\d\d\l\0\w\t\a\p\t\d\x\k\4\o\a\s\5\m\m\q\7\h\1\9\m\l\c\e\n\k\j\q\j\9\c\a\0\z\w\7\q\b\2\h\a\l\g\w\3\d\u\y\l\e\n\2\y\r\4\u\t\t\1\k\0\x\9\6\k\s\l\1\q\a\b\t\c\r\e\u\l\4\e\7\m\e\g\v\q\8\o\b\7\k\a\8\v\z\c\4\k\u\t\7\n\e\6\f\1\o\c\u\b\b\7\7\o\x\k\w\s\h\v\s\9\p\8\g\j\g\0\v\u\a\f\y\6\s\3\m\2\k\a\8\0\a\i\k\y\t\7\n\z\3\7\z\k\a\t\j\k\9\l\e\x\p\4\d\t\4\e\0\o\g\a\8\c\u\z\w\o\8\l\y\8\g\4\q\n\v\2\8\9\0\l\1\p\y\r\r\b\4\a\j\i\9\u\k\2\2\4\t\t\a\g\7\j\s\9\r\8\9\h\c\z\y\a\e\e\z\x\z\e\j\8\e\m\v\t\a\2\r\0\j\b\o\5\0\j\c\8\9\8\f\3\e\h\k\i\j\y\n\m\0\q\5\j\m\v\k\f\j\j\j\p\j\4\m\z\d\q\q\5\v\j\v\5\z\c\a\o\p\1\q\n\5\r\6\1\z\h\3\o\r\j\l\x\i\j\b\t\i\7\0\w\9\c\a\e\k\f\v\g\q\r\q\g\p\g\o\0\r\j\c\a\j\t\8\r\u\8\a\v\e\m\u\t\3\q\8\c\1\v\c\d\b\h\p\1\c\l\p\3\8\a\x\s\m\4\o\i\u\8\z\e\c\t\t\m\q\h\6\5\v\q\z\b\6\y\z\1\k\l\b\3\a\x\f\5\e\w\k\d\m\o\k\n\4\4\c\s\7\r\u\x\w\g\s\9\m\v\6\l\p\c\4\2\q\e\o\x\5\x\8\4\q\s\i\f\y\s\e\b\0\m\v\4\8\q\p\j\z\4\h\c\a\5\f\u\1\p\w\9\6\q\2\q\d\7\l\0\i\y\l\f\5\r\p\n\n\4\d\t\b\t\1\c\x\3\e\x\f\v\k\b\g\4\v\w\c\r\o\l\9\5\6\m\z\u\1\w\m\w\q\0\m\2\z\t\u\k\e\h\5\s\j\r\k\i\k\0\1\w\y\c\t\m\n\r\b\5\9\u\d\q\5\b\a\h\0\j\c\k\0\w\k\m\4\2\0\a\a\3\u\t\0\r\t\l\1\s\b\1\e\t\e\v\2\s\a\p\9\f\9\6\d\l\0\u\d\5\d\i\6\r\t\6\y\p\v\b\6\n\9\m\f\o\y\1\n\2\5\m\5\4\h\h\8\a\z\5\0\q\z\g\g\9\5\o\j\s\c\a\5\e\d\2\9\v\9\r\z\0\w\k\m\n\q\r\i\u\x\m\0\j\2\4\t\2\6\g\h\b\e\9\u\o\m\1\u\s\y\i\5\1\b\g\2\l\h\b\8\o\t\m\u\b\b\y\h\5\a\9\8
[ remainder of the shell-escaped random-data comparison from the dd_rw_offset pattern-match trace elided ]
00:26:56.731 00:26:56.731 real 0m1.591s 00:26:56.731 user 0m1.014s 00:26:56.731 sys 0m0.457s 00:26:56.731 ************************************ 00:26:56.731 END TEST dd_rw_offset 00:26:56.731 ************************************ 10:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.731 10:52:23 -- common/autotest_common.sh@10 -- # set +x 00:26:56.998 10:52:23 -- dd/basic_rw.sh@1 -- # cleanup 00:26:56.998 10:52:23 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:26:56.998 10:52:23 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:56.998 10:52:23 -- dd/common.sh@11 -- # local nvme_ref= 00:26:56.998 10:52:23 -- dd/common.sh@12 -- # local size=0xffff 00:26:56.998 10:52:23 -- dd/common.sh@14 -- # local bs=1048576
00:26:56.998 10:52:23 -- dd/common.sh@15 -- # local count=1 00:26:56.998 10:52:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:56.998 10:52:23 -- dd/common.sh@18 -- # gen_conf 00:26:56.998 10:52:23 -- dd/common.sh@31 -- # xtrace_disable 00:26:56.998 10:52:23 -- common/autotest_common.sh@10 -- # set +x 00:26:56.998 [2024-07-24 10:52:23.475686] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:56.998 [2024-07-24 10:52:23.475910] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144910 ] 00:26:56.998 { 00:26:56.998 "subsystems": [ 00:26:56.998 { 00:26:56.998 "subsystem": "bdev", 00:26:56.998 "config": [ 00:26:56.998 { 00:26:56.998 "params": { 00:26:56.998 "trtype": "pcie", 00:26:56.998 "traddr": "0000:00:06.0", 00:26:56.998 "name": "Nvme0" 00:26:56.998 }, 00:26:56.998 "method": "bdev_nvme_attach_controller" 00:26:56.998 }, 00:26:56.998 { 00:26:56.998 "method": "bdev_wait_for_examine" 00:26:56.998 } 00:26:56.998 ] 00:26:56.998 } 00:26:56.998 ] 00:26:56.998 } 00:26:56.998 [2024-07-24 10:52:23.621753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.258 [2024-07-24 10:52:23.712775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.517  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:57.517 00:26:57.517 10:52:24 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:57.517 00:26:57.517 real 0m20.071s 00:26:57.517 user 0m13.426s 00:26:57.517 sys 0m4.925s 00:26:57.517 10:52:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.517 10:52:24 -- common/autotest_common.sh@10 -- # set +x 00:26:57.517 ************************************ 00:26:57.517 END TEST spdk_dd_basic_rw 00:26:57.517 ************************************ 00:26:57.776 10:52:24 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:57.776 10:52:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:57.776 10:52:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:57.776 10:52:24 -- common/autotest_common.sh@10 -- # set +x 00:26:57.776 ************************************ 00:26:57.776 START TEST spdk_dd_posix 00:26:57.776 ************************************ 00:26:57.776 10:52:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:26:57.776 * Looking for test storage... 
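The clear_nvme step traced above drives spdk_dd itself: it generates a one-controller bdev config on the fly and zero-fills the first megabyte of Nvme0n1. A minimal standalone sketch of that invocation, assuming SPDK is built at the path used throughout this run and writing the config to a scratch file instead of /dev/fd/62:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cat > nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# zero-fill 1 MiB of the Nvme0n1 bdev, as the cleanup above does
"$SPDK_DD" --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 --json nvme.json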
00:26:57.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:57.776 10:52:24 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:57.776 10:52:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.777 10:52:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.777 10:52:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.777 10:52:24 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.777 10:52:24 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.777 10:52:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.777 10:52:24 -- paths/export.sh@5 -- # export PATH 00:26:57.777 10:52:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:57.777 10:52:24 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:26:57.777 10:52:24 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:26:57.777 10:52:24 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:26:57.777 10:52:24 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:26:57.777 10:52:24 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:57.777 10:52:24 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:57.777 10:52:24 -- dd/posix.sh@130 -- # tests 00:26:57.777 10:52:24 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:26:57.777 * First test run, using AIO 00:26:57.777 10:52:24 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:26:57.777 10:52:24 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:57.777 10:52:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:57.777 10:52:24 -- common/autotest_common.sh@10 -- # set +x 00:26:57.777 ************************************ 00:26:57.777 START TEST dd_flag_append 00:26:57.777 ************************************ 00:26:57.777 10:52:24 -- common/autotest_common.sh@1104 -- # append 00:26:57.777 10:52:24 -- dd/posix.sh@16 -- # local dump0 00:26:57.777 10:52:24 -- dd/posix.sh@17 -- # local dump1 00:26:57.777 10:52:24 -- dd/posix.sh@19 -- # gen_bytes 32 00:26:57.777 10:52:24 -- dd/common.sh@98 -- # xtrace_disable 00:26:57.777 10:52:24 -- common/autotest_common.sh@10 -- # set +x 00:26:57.777 10:52:24 -- dd/posix.sh@19 -- # dump0=hxyhd8x0t65bruvnzfianq606wozvkre 00:26:57.777 10:52:24 -- dd/posix.sh@20 -- # gen_bytes 32 00:26:57.777 10:52:24 -- dd/common.sh@98 -- # xtrace_disable 00:26:57.777 10:52:24 -- common/autotest_common.sh@10 -- # set +x 00:26:57.777 10:52:24 -- dd/posix.sh@20 -- # dump1=4sec8tm522wpd6rxxe3een84bp419sv1 00:26:57.777 10:52:24 -- dd/posix.sh@22 -- # printf %s hxyhd8x0t65bruvnzfianq606wozvkre 00:26:57.777 10:52:24 -- dd/posix.sh@23 -- # printf %s 4sec8tm522wpd6rxxe3een84bp419sv1 00:26:57.777 10:52:24 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:26:57.777 [2024-07-24 10:52:24.383036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:57.777 [2024-07-24 10:52:24.383267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144979 ] 00:26:58.035 [2024-07-24 10:52:24.526184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.035 [2024-07-24 10:52:24.608955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.603  Copying: 32/32 [B] (average 31 kBps) 00:26:58.603 00:26:58.603 10:52:24 -- dd/posix.sh@27 -- # [[ 4sec8tm522wpd6rxxe3een84bp419sv1hxyhd8x0t65bruvnzfianq606wozvkre == \4\s\e\c\8\t\m\5\2\2\w\p\d\6\r\x\x\e\3\e\e\n\8\4\b\p\4\1\9\s\v\1\h\x\y\h\d\8\x\0\t\6\5\b\r\u\v\n\z\f\i\a\n\q\6\0\6\w\o\z\v\k\r\e ]] 00:26:58.603 00:26:58.603 real 0m0.672s 00:26:58.603 user 0m0.337s 00:26:58.603 sys 0m0.188s 00:26:58.603 10:52:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.603 ************************************ 00:26:58.603 END TEST dd_flag_append 00:26:58.603 ************************************ 00:26:58.603 10:52:24 -- common/autotest_common.sh@10 -- # set +x 00:26:58.603 10:52:25 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:26:58.603 10:52:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:58.603 10:52:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:58.603 10:52:25 -- common/autotest_common.sh@10 -- # set +x 00:26:58.603 ************************************ 00:26:58.603 START TEST dd_flag_directory 00:26:58.603 ************************************ 00:26:58.603 10:52:25 -- common/autotest_common.sh@1104 -- # directory 00:26:58.603 10:52:25 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:58.603 10:52:25 -- common/autotest_common.sh@640 -- # local es=0 
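The dd_flag_append test that just passed checks that --oflag=append adds the input after the existing contents of the output file instead of truncating it: the assertion above expects the output to equal the second 32-byte string followed by the first. A rough equivalent, using relative stand-ins for the two dump files:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=$(head -c 24 /dev/urandom | base64)   # 32 printable characters, like gen_bytes 32
dump1=$(head -c 24 /dev/urandom | base64)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
# the output file must now hold dump1 immediately followed by dump0
[[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]] && echo "append OK"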
00:26:58.603 10:52:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:58.603 10:52:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:58.603 10:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:58.603 10:52:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:58.603 10:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:58.603 10:52:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:58.603 10:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:58.603 10:52:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:58.603 10:52:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:58.603 10:52:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:58.603 [2024-07-24 10:52:25.106623] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:58.603 [2024-07-24 10:52:25.106852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145021 ] 00:26:58.603 [2024-07-24 10:52:25.252037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.862 [2024-07-24 10:52:25.334195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.862 [2024-07-24 10:52:25.416137] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:58.862 [2024-07-24 10:52:25.416221] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:58.862 [2024-07-24 10:52:25.416256] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:58.862 [2024-07-24 10:52:25.534523] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:59.120 10:52:25 -- common/autotest_common.sh@643 -- # es=236 00:26:59.120 10:52:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:59.120 10:52:25 -- common/autotest_common.sh@652 -- # es=108 00:26:59.120 10:52:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:59.120 10:52:25 -- common/autotest_common.sh@660 -- # es=1 00:26:59.120 10:52:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:59.120 10:52:25 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:59.120 10:52:25 -- common/autotest_common.sh@640 -- # local es=0 00:26:59.120 10:52:25 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:59.120 10:52:25 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.120 10:52:25 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:59.120 10:52:25 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.120 10:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:59.120 10:52:25 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.120 10:52:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:59.120 10:52:25 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.120 10:52:25 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:59.120 10:52:25 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:59.120 [2024-07-24 10:52:25.720946] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:59.120 [2024-07-24 10:52:25.721180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145036 ] 00:26:59.378 [2024-07-24 10:52:25.867526] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.378 [2024-07-24 10:52:25.937565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.378 [2024-07-24 10:52:26.018908] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:59.378 [2024-07-24 10:52:26.018997] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:59.378 [2024-07-24 10:52:26.019039] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:59.637 [2024-07-24 10:52:26.140878] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:59.637 10:52:26 -- common/autotest_common.sh@643 -- # es=236 00:26:59.637 10:52:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:59.637 10:52:26 -- common/autotest_common.sh@652 -- # es=108 00:26:59.637 10:52:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:59.637 10:52:26 -- common/autotest_common.sh@660 -- # es=1 00:26:59.637 10:52:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:59.637 00:26:59.637 real 0m1.214s 00:26:59.637 user 0m0.609s 00:26:59.637 sys 0m0.406s 00:26:59.637 10:52:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.637 10:52:26 -- common/autotest_common.sh@10 -- # set +x 00:26:59.637 ************************************ 00:26:59.637 END TEST dd_flag_directory 00:26:59.637 ************************************ 00:26:59.637 10:52:26 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:26:59.637 10:52:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:59.637 10:52:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:59.637 10:52:26 -- common/autotest_common.sh@10 -- # set +x 00:26:59.637 ************************************ 00:26:59.637 START TEST dd_flag_nofollow 00:26:59.637 ************************************ 00:26:59.637 10:52:26 -- common/autotest_common.sh@1104 -- # nofollow 00:26:59.637 10:52:26 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:59.637 10:52:26 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:59.637 10:52:26 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:59.637 10:52:26 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:59.637 10:52:26 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:59.896 10:52:26 -- common/autotest_common.sh@640 -- # local es=0 00:26:59.896 10:52:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:59.896 10:52:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.896 10:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:59.896 10:52:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.896 10:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:59.896 10:52:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.896 10:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:59.896 10:52:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.896 10:52:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:59.896 10:52:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:59.896 [2024-07-24 10:52:26.382581] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
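The dd_flag_directory test that finished above is a pure negative check: pointing --iflag=directory, and then --oflag=directory, at a regular file has to make spdk_dd fail with "Not a directory", and the NOT wrapper asserts the non-zero exit status. Reduced to its essentials, with a relative stand-in for the dump file:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf 'payload' > dd.dump0
# both runs are expected to fail, so invert the exit status
! "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0 || echo "unexpected success"
! "$SPDK_DD" --if=dd.dump0 --of=dd.dump0 --oflag=directory || echo "unexpected success"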
00:26:59.896 [2024-07-24 10:52:26.382805] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145068 ] 00:26:59.896 [2024-07-24 10:52:26.529182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.155 [2024-07-24 10:52:26.607117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.155 [2024-07-24 10:52:26.691091] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:00.156 [2024-07-24 10:52:26.691182] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:00.156 [2024-07-24 10:52:26.691224] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:00.156 [2024-07-24 10:52:26.809529] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:00.414 10:52:26 -- common/autotest_common.sh@643 -- # es=216 00:27:00.415 10:52:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:00.415 10:52:26 -- common/autotest_common.sh@652 -- # es=88 00:27:00.415 10:52:26 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:00.415 10:52:26 -- common/autotest_common.sh@660 -- # es=1 00:27:00.415 10:52:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:00.415 10:52:26 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:00.415 10:52:26 -- common/autotest_common.sh@640 -- # local es=0 00:27:00.415 10:52:26 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:00.415 10:52:26 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.415 10:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:00.415 10:52:26 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.415 10:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:00.415 10:52:26 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.415 10:52:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:00.415 10:52:26 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.415 10:52:26 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:00.415 10:52:26 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:00.415 [2024-07-24 10:52:26.985874] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:00.415 [2024-07-24 10:52:26.986112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145083 ] 00:27:00.674 [2024-07-24 10:52:27.134535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.674 [2024-07-24 10:52:27.219812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.674 [2024-07-24 10:52:27.302803] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:00.674 [2024-07-24 10:52:27.302900] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:00.674 [2024-07-24 10:52:27.302939] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:00.932 [2024-07-24 10:52:27.426657] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:00.932 10:52:27 -- common/autotest_common.sh@643 -- # es=216 00:27:00.932 10:52:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:00.932 10:52:27 -- common/autotest_common.sh@652 -- # es=88 00:27:00.932 10:52:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:00.932 10:52:27 -- common/autotest_common.sh@660 -- # es=1 00:27:00.932 10:52:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:00.932 10:52:27 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:00.932 10:52:27 -- dd/common.sh@98 -- # xtrace_disable 00:27:00.932 10:52:27 -- common/autotest_common.sh@10 -- # set +x 00:27:00.932 10:52:27 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:00.932 [2024-07-24 10:52:27.612724] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
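dd_flag_nofollow links both dump files through symlinks and expects spdk_dd to refuse the link when nofollow is requested on either side, while the plain copy through the link that starts below must still succeed. A sketch of the three invocations, with relative paths standing in for the real dump files:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
! "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # must fail: too many levels of symbolic links
! "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # must fail on the output side as well
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1                      # without nofollow the copy goes through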
00:27:00.932 [2024-07-24 10:52:27.613009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145097 ] 00:27:01.191 [2024-07-24 10:52:27.759233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.191 [2024-07-24 10:52:27.838998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.707  Copying: 512/512 [B] (average 500 kBps) 00:27:01.707 00:27:01.707 10:52:28 -- dd/posix.sh@49 -- # [[ 5eynq10nc3sozw5ow2n6uax6o3bp4u39mkp5e4i6kyjk9k16nd8t8vro0rrbdcqnmwji5xy0ladtqjeskljiy03z37pigz1x7wg1ekp0z20x0uzlqc3n25552cfmlrq9khk80naxftlzkoxw1xqewco0o0zn1z5iy8jwicvfhpop6lfqa0gan3o6x41vbpvmgnqxkd1j73d8uo2oyu6rtw7xl05n1ywzz7t7nq4tk00zlvit9sn83n4twiws1yxxj2qhqar9547vh5act9goq3zr9qa35kzu8jxulazwwwj8t42sd7numi65otdnaend64rr5pz047d8if6v701wd56ebr9j7v7em39bb7f5pqwxkhg3cz6jnxikkj0gvakmgydq0eh7tc2xayfbdzruz8cqf5x5yks3w1ota05eb5rztihu860fqzt80i8x1872inhztum2i8om4hj1lwttko5y8vgp17owk2l6u324liisgd0302crqz93fp2hy4p2 == \5\e\y\n\q\1\0\n\c\3\s\o\z\w\5\o\w\2\n\6\u\a\x\6\o\3\b\p\4\u\3\9\m\k\p\5\e\4\i\6\k\y\j\k\9\k\1\6\n\d\8\t\8\v\r\o\0\r\r\b\d\c\q\n\m\w\j\i\5\x\y\0\l\a\d\t\q\j\e\s\k\l\j\i\y\0\3\z\3\7\p\i\g\z\1\x\7\w\g\1\e\k\p\0\z\2\0\x\0\u\z\l\q\c\3\n\2\5\5\5\2\c\f\m\l\r\q\9\k\h\k\8\0\n\a\x\f\t\l\z\k\o\x\w\1\x\q\e\w\c\o\0\o\0\z\n\1\z\5\i\y\8\j\w\i\c\v\f\h\p\o\p\6\l\f\q\a\0\g\a\n\3\o\6\x\4\1\v\b\p\v\m\g\n\q\x\k\d\1\j\7\3\d\8\u\o\2\o\y\u\6\r\t\w\7\x\l\0\5\n\1\y\w\z\z\7\t\7\n\q\4\t\k\0\0\z\l\v\i\t\9\s\n\8\3\n\4\t\w\i\w\s\1\y\x\x\j\2\q\h\q\a\r\9\5\4\7\v\h\5\a\c\t\9\g\o\q\3\z\r\9\q\a\3\5\k\z\u\8\j\x\u\l\a\z\w\w\w\j\8\t\4\2\s\d\7\n\u\m\i\6\5\o\t\d\n\a\e\n\d\6\4\r\r\5\p\z\0\4\7\d\8\i\f\6\v\7\0\1\w\d\5\6\e\b\r\9\j\7\v\7\e\m\3\9\b\b\7\f\5\p\q\w\x\k\h\g\3\c\z\6\j\n\x\i\k\k\j\0\g\v\a\k\m\g\y\d\q\0\e\h\7\t\c\2\x\a\y\f\b\d\z\r\u\z\8\c\q\f\5\x\5\y\k\s\3\w\1\o\t\a\0\5\e\b\5\r\z\t\i\h\u\8\6\0\f\q\z\t\8\0\i\8\x\1\8\7\2\i\n\h\z\t\u\m\2\i\8\o\m\4\h\j\1\l\w\t\t\k\o\5\y\8\v\g\p\1\7\o\w\k\2\l\6\u\3\2\4\l\i\i\s\g\d\0\3\0\2\c\r\q\z\9\3\f\p\2\h\y\4\p\2 ]] 00:27:01.707 00:27:01.707 real 0m1.911s 00:27:01.707 user 0m0.999s 00:27:01.707 sys 0m0.576s 00:27:01.707 10:52:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.707 10:52:28 -- common/autotest_common.sh@10 -- # set +x 00:27:01.707 ************************************ 00:27:01.707 END TEST dd_flag_nofollow 00:27:01.707 ************************************ 00:27:01.707 10:52:28 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:27:01.707 10:52:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:01.707 10:52:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:01.707 10:52:28 -- common/autotest_common.sh@10 -- # set +x 00:27:01.707 ************************************ 00:27:01.707 START TEST dd_flag_noatime 00:27:01.707 ************************************ 00:27:01.707 10:52:28 -- common/autotest_common.sh@1104 -- # noatime 00:27:01.708 10:52:28 -- dd/posix.sh@53 -- # local atime_if 00:27:01.708 10:52:28 -- dd/posix.sh@54 -- # local atime_of 00:27:01.708 10:52:28 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:01.708 10:52:28 -- dd/common.sh@98 -- # xtrace_disable 00:27:01.708 10:52:28 -- common/autotest_common.sh@10 -- # set +x 00:27:01.708 10:52:28 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:01.708 10:52:28 -- dd/posix.sh@60 -- # atime_if=1721818347 00:27:01.708 10:52:28 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:01.708 10:52:28 -- dd/posix.sh@61 -- # atime_of=1721818348 00:27:01.708 10:52:28 -- dd/posix.sh@66 -- # sleep 1 00:27:02.641 10:52:29 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:02.907 [2024-07-24 10:52:29.351356] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:02.907 [2024-07-24 10:52:29.351584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145145 ] 00:27:02.907 [2024-07-24 10:52:29.492438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.907 [2024-07-24 10:52:29.580690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.423  Copying: 512/512 [B] (average 500 kBps) 00:27:03.423 00:27:03.423 10:52:29 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:03.423 10:52:29 -- dd/posix.sh@69 -- # (( atime_if == 1721818347 )) 00:27:03.423 10:52:29 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:03.423 10:52:29 -- dd/posix.sh@70 -- # (( atime_of == 1721818348 )) 00:27:03.423 10:52:29 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:03.423 [2024-07-24 10:52:30.006256] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:03.423 [2024-07-24 10:52:30.006561] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145163 ] 00:27:03.681 [2024-07-24 10:52:30.158147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.681 [2024-07-24 10:52:30.241545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.249  Copying: 512/512 [B] (average 500 kBps) 00:27:04.249 00:27:04.249 10:52:30 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:04.249 10:52:30 -- dd/posix.sh@73 -- # (( atime_if < 1721818350 )) 00:27:04.249 00:27:04.249 real 0m2.364s 00:27:04.249 user 0m0.719s 00:27:04.249 sys 0m0.373s 00:27:04.249 10:52:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.249 10:52:30 -- common/autotest_common.sh@10 -- # set +x 00:27:04.249 ************************************ 00:27:04.249 END TEST dd_flag_noatime 00:27:04.249 ************************************ 00:27:04.249 10:52:30 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:27:04.249 10:52:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:04.249 10:52:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.249 10:52:30 -- common/autotest_common.sh@10 -- # set +x 00:27:04.249 ************************************ 00:27:04.249 START TEST dd_flags_misc 00:27:04.249 ************************************ 00:27:04.249 10:52:30 -- common/autotest_common.sh@1104 -- # io 00:27:04.249 10:52:30 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:04.249 10:52:30 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
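dd_flag_noatime, traced just above, records the source file's access time with stat --printf=%X, sleeps a second, and copies once with --iflag=noatime (the atime must not move) and once without it (the harness then only checks the atime against the current clock). Roughly, and assuming a filesystem that updates atime at all:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_before=$(stat --printf=%X dd.dump0)
sleep 1
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
[[ $(stat --printf=%X dd.dump0) -eq $atime_before ]] || echo "atime changed despite --iflag=noatime"
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1     # second copy without noatime is allowed to bump the atime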
00:27:04.249 10:52:30 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:04.249 10:52:30 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:04.249 10:52:30 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:04.249 10:52:30 -- dd/common.sh@98 -- # xtrace_disable 00:27:04.249 10:52:30 -- common/autotest_common.sh@10 -- # set +x 00:27:04.249 10:52:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:04.249 10:52:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:04.249 [2024-07-24 10:52:30.768371] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:04.249 [2024-07-24 10:52:30.768659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145195 ] 00:27:04.249 [2024-07-24 10:52:30.917881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.507 [2024-07-24 10:52:31.007666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.765  Copying: 512/512 [B] (average 500 kBps) 00:27:04.765 00:27:04.766 10:52:31 -- dd/posix.sh@93 -- # [[ 55zi9vqhn0jigknec8raenm4e7ul8431bosvzcrsgktkituxy2xz48f9ck6wg4kbu4fwstb3p34ipxy5wgkr4ydetijdnc5uubnm07n1yodb4vpjy0dt5ldbkz4i3xy7sxgon8lyl8r6neeyrmadwlr91mj4ckb4nacyy5oi8ovbxoxxpcu5e8jopc6962qn4k664e72u76zqqc1e0l7jdt5dyx9fodnid2ptporl4fp9pmkiv5f3rk0gzxx3lh30i98xo25l24ko3ac4ykhgaibsctqiwvvhtmlzn8n1nr5843hv2gmpyg3i1wjrhwclhlv9b9wwhqjr4ftrod3nhuq8a72pv9tv94st57gmaig72bfu7rz59rl3od8frnfvzbt2mph091v1ugd4tugpv3ogmgvbw9stc3swlbj5tinvmpnxa2lpjygwyqiwba9ek7fs176d8267yh9g95ebr233occoiy3mngp8nuyk54ohoqcqcdabdqku47uqog1 == \5\5\z\i\9\v\q\h\n\0\j\i\g\k\n\e\c\8\r\a\e\n\m\4\e\7\u\l\8\4\3\1\b\o\s\v\z\c\r\s\g\k\t\k\i\t\u\x\y\2\x\z\4\8\f\9\c\k\6\w\g\4\k\b\u\4\f\w\s\t\b\3\p\3\4\i\p\x\y\5\w\g\k\r\4\y\d\e\t\i\j\d\n\c\5\u\u\b\n\m\0\7\n\1\y\o\d\b\4\v\p\j\y\0\d\t\5\l\d\b\k\z\4\i\3\x\y\7\s\x\g\o\n\8\l\y\l\8\r\6\n\e\e\y\r\m\a\d\w\l\r\9\1\m\j\4\c\k\b\4\n\a\c\y\y\5\o\i\8\o\v\b\x\o\x\x\p\c\u\5\e\8\j\o\p\c\6\9\6\2\q\n\4\k\6\6\4\e\7\2\u\7\6\z\q\q\c\1\e\0\l\7\j\d\t\5\d\y\x\9\f\o\d\n\i\d\2\p\t\p\o\r\l\4\f\p\9\p\m\k\i\v\5\f\3\r\k\0\g\z\x\x\3\l\h\3\0\i\9\8\x\o\2\5\l\2\4\k\o\3\a\c\4\y\k\h\g\a\i\b\s\c\t\q\i\w\v\v\h\t\m\l\z\n\8\n\1\n\r\5\8\4\3\h\v\2\g\m\p\y\g\3\i\1\w\j\r\h\w\c\l\h\l\v\9\b\9\w\w\h\q\j\r\4\f\t\r\o\d\3\n\h\u\q\8\a\7\2\p\v\9\t\v\9\4\s\t\5\7\g\m\a\i\g\7\2\b\f\u\7\r\z\5\9\r\l\3\o\d\8\f\r\n\f\v\z\b\t\2\m\p\h\0\9\1\v\1\u\g\d\4\t\u\g\p\v\3\o\g\m\g\v\b\w\9\s\t\c\3\s\w\l\b\j\5\t\i\n\v\m\p\n\x\a\2\l\p\j\y\g\w\y\q\i\w\b\a\9\e\k\7\f\s\1\7\6\d\8\2\6\7\y\h\9\g\9\5\e\b\r\2\3\3\o\c\c\o\i\y\3\m\n\g\p\8\n\u\y\k\5\4\o\h\o\q\c\q\c\d\a\b\d\q\k\u\4\7\u\q\o\g\1 ]] 00:27:04.766 10:52:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:04.766 10:52:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:05.024 [2024-07-24 10:52:31.454067] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
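dd_flags_misc, whose iterations fill the next stretch of the log, is a small matrix test: every combination of a read flag in (direct nonblock) and a write flag in (direct nonblock sync dsync) copies the same 512-byte payload, and the result is compared back against the source. The loop is essentially:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  head -c 512 /dev/urandom > dd.dump0              # fresh payload per outer iteration, like gen_bytes 512
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    cmp -s dd.dump0 dd.dump1 || echo "mismatch with $flag_ro/$flag_rw"
  done
done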
00:27:05.024 [2024-07-24 10:52:31.454548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145206 ] 00:27:05.024 [2024-07-24 10:52:31.603564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.024 [2024-07-24 10:52:31.681794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.541  Copying: 512/512 [B] (average 500 kBps) 00:27:05.541 00:27:05.541 10:52:32 -- dd/posix.sh@93 -- # [[ 55zi9vqhn0jigknec8raenm4e7ul8431bosvzcrsgktkituxy2xz48f9ck6wg4kbu4fwstb3p34ipxy5wgkr4ydetijdnc5uubnm07n1yodb4vpjy0dt5ldbkz4i3xy7sxgon8lyl8r6neeyrmadwlr91mj4ckb4nacyy5oi8ovbxoxxpcu5e8jopc6962qn4k664e72u76zqqc1e0l7jdt5dyx9fodnid2ptporl4fp9pmkiv5f3rk0gzxx3lh30i98xo25l24ko3ac4ykhgaibsctqiwvvhtmlzn8n1nr5843hv2gmpyg3i1wjrhwclhlv9b9wwhqjr4ftrod3nhuq8a72pv9tv94st57gmaig72bfu7rz59rl3od8frnfvzbt2mph091v1ugd4tugpv3ogmgvbw9stc3swlbj5tinvmpnxa2lpjygwyqiwba9ek7fs176d8267yh9g95ebr233occoiy3mngp8nuyk54ohoqcqcdabdqku47uqog1 == \5\5\z\i\9\v\q\h\n\0\j\i\g\k\n\e\c\8\r\a\e\n\m\4\e\7\u\l\8\4\3\1\b\o\s\v\z\c\r\s\g\k\t\k\i\t\u\x\y\2\x\z\4\8\f\9\c\k\6\w\g\4\k\b\u\4\f\w\s\t\b\3\p\3\4\i\p\x\y\5\w\g\k\r\4\y\d\e\t\i\j\d\n\c\5\u\u\b\n\m\0\7\n\1\y\o\d\b\4\v\p\j\y\0\d\t\5\l\d\b\k\z\4\i\3\x\y\7\s\x\g\o\n\8\l\y\l\8\r\6\n\e\e\y\r\m\a\d\w\l\r\9\1\m\j\4\c\k\b\4\n\a\c\y\y\5\o\i\8\o\v\b\x\o\x\x\p\c\u\5\e\8\j\o\p\c\6\9\6\2\q\n\4\k\6\6\4\e\7\2\u\7\6\z\q\q\c\1\e\0\l\7\j\d\t\5\d\y\x\9\f\o\d\n\i\d\2\p\t\p\o\r\l\4\f\p\9\p\m\k\i\v\5\f\3\r\k\0\g\z\x\x\3\l\h\3\0\i\9\8\x\o\2\5\l\2\4\k\o\3\a\c\4\y\k\h\g\a\i\b\s\c\t\q\i\w\v\v\h\t\m\l\z\n\8\n\1\n\r\5\8\4\3\h\v\2\g\m\p\y\g\3\i\1\w\j\r\h\w\c\l\h\l\v\9\b\9\w\w\h\q\j\r\4\f\t\r\o\d\3\n\h\u\q\8\a\7\2\p\v\9\t\v\9\4\s\t\5\7\g\m\a\i\g\7\2\b\f\u\7\r\z\5\9\r\l\3\o\d\8\f\r\n\f\v\z\b\t\2\m\p\h\0\9\1\v\1\u\g\d\4\t\u\g\p\v\3\o\g\m\g\v\b\w\9\s\t\c\3\s\w\l\b\j\5\t\i\n\v\m\p\n\x\a\2\l\p\j\y\g\w\y\q\i\w\b\a\9\e\k\7\f\s\1\7\6\d\8\2\6\7\y\h\9\g\9\5\e\b\r\2\3\3\o\c\c\o\i\y\3\m\n\g\p\8\n\u\y\k\5\4\o\h\o\q\c\q\c\d\a\b\d\q\k\u\4\7\u\q\o\g\1 ]] 00:27:05.541 10:52:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:05.541 10:52:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:05.541 [2024-07-24 10:52:32.117481] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:05.541 [2024-07-24 10:52:32.117786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145221 ] 00:27:05.800 [2024-07-24 10:52:32.262257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.800 [2024-07-24 10:52:32.329009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.058  Copying: 512/512 [B] (average 125 kBps) 00:27:06.058 00:27:06.058 10:52:32 -- dd/posix.sh@93 -- # [[ 55zi9vqhn0jigknec8raenm4e7ul8431bosvzcrsgktkituxy2xz48f9ck6wg4kbu4fwstb3p34ipxy5wgkr4ydetijdnc5uubnm07n1yodb4vpjy0dt5ldbkz4i3xy7sxgon8lyl8r6neeyrmadwlr91mj4ckb4nacyy5oi8ovbxoxxpcu5e8jopc6962qn4k664e72u76zqqc1e0l7jdt5dyx9fodnid2ptporl4fp9pmkiv5f3rk0gzxx3lh30i98xo25l24ko3ac4ykhgaibsctqiwvvhtmlzn8n1nr5843hv2gmpyg3i1wjrhwclhlv9b9wwhqjr4ftrod3nhuq8a72pv9tv94st57gmaig72bfu7rz59rl3od8frnfvzbt2mph091v1ugd4tugpv3ogmgvbw9stc3swlbj5tinvmpnxa2lpjygwyqiwba9ek7fs176d8267yh9g95ebr233occoiy3mngp8nuyk54ohoqcqcdabdqku47uqog1 == \5\5\z\i\9\v\q\h\n\0\j\i\g\k\n\e\c\8\r\a\e\n\m\4\e\7\u\l\8\4\3\1\b\o\s\v\z\c\r\s\g\k\t\k\i\t\u\x\y\2\x\z\4\8\f\9\c\k\6\w\g\4\k\b\u\4\f\w\s\t\b\3\p\3\4\i\p\x\y\5\w\g\k\r\4\y\d\e\t\i\j\d\n\c\5\u\u\b\n\m\0\7\n\1\y\o\d\b\4\v\p\j\y\0\d\t\5\l\d\b\k\z\4\i\3\x\y\7\s\x\g\o\n\8\l\y\l\8\r\6\n\e\e\y\r\m\a\d\w\l\r\9\1\m\j\4\c\k\b\4\n\a\c\y\y\5\o\i\8\o\v\b\x\o\x\x\p\c\u\5\e\8\j\o\p\c\6\9\6\2\q\n\4\k\6\6\4\e\7\2\u\7\6\z\q\q\c\1\e\0\l\7\j\d\t\5\d\y\x\9\f\o\d\n\i\d\2\p\t\p\o\r\l\4\f\p\9\p\m\k\i\v\5\f\3\r\k\0\g\z\x\x\3\l\h\3\0\i\9\8\x\o\2\5\l\2\4\k\o\3\a\c\4\y\k\h\g\a\i\b\s\c\t\q\i\w\v\v\h\t\m\l\z\n\8\n\1\n\r\5\8\4\3\h\v\2\g\m\p\y\g\3\i\1\w\j\r\h\w\c\l\h\l\v\9\b\9\w\w\h\q\j\r\4\f\t\r\o\d\3\n\h\u\q\8\a\7\2\p\v\9\t\v\9\4\s\t\5\7\g\m\a\i\g\7\2\b\f\u\7\r\z\5\9\r\l\3\o\d\8\f\r\n\f\v\z\b\t\2\m\p\h\0\9\1\v\1\u\g\d\4\t\u\g\p\v\3\o\g\m\g\v\b\w\9\s\t\c\3\s\w\l\b\j\5\t\i\n\v\m\p\n\x\a\2\l\p\j\y\g\w\y\q\i\w\b\a\9\e\k\7\f\s\1\7\6\d\8\2\6\7\y\h\9\g\9\5\e\b\r\2\3\3\o\c\c\o\i\y\3\m\n\g\p\8\n\u\y\k\5\4\o\h\o\q\c\q\c\d\a\b\d\q\k\u\4\7\u\q\o\g\1 ]] 00:27:06.058 10:52:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:06.058 10:52:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:06.317 [2024-07-24 10:52:32.790394] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:06.317 [2024-07-24 10:52:32.790644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145238 ] 00:27:06.317 [2024-07-24 10:52:32.939065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.575 [2024-07-24 10:52:33.009258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.834  Copying: 512/512 [B] (average 125 kBps) 00:27:06.834 00:27:06.834 10:52:33 -- dd/posix.sh@93 -- # [[ 55zi9vqhn0jigknec8raenm4e7ul8431bosvzcrsgktkituxy2xz48f9ck6wg4kbu4fwstb3p34ipxy5wgkr4ydetijdnc5uubnm07n1yodb4vpjy0dt5ldbkz4i3xy7sxgon8lyl8r6neeyrmadwlr91mj4ckb4nacyy5oi8ovbxoxxpcu5e8jopc6962qn4k664e72u76zqqc1e0l7jdt5dyx9fodnid2ptporl4fp9pmkiv5f3rk0gzxx3lh30i98xo25l24ko3ac4ykhgaibsctqiwvvhtmlzn8n1nr5843hv2gmpyg3i1wjrhwclhlv9b9wwhqjr4ftrod3nhuq8a72pv9tv94st57gmaig72bfu7rz59rl3od8frnfvzbt2mph091v1ugd4tugpv3ogmgvbw9stc3swlbj5tinvmpnxa2lpjygwyqiwba9ek7fs176d8267yh9g95ebr233occoiy3mngp8nuyk54ohoqcqcdabdqku47uqog1 == \5\5\z\i\9\v\q\h\n\0\j\i\g\k\n\e\c\8\r\a\e\n\m\4\e\7\u\l\8\4\3\1\b\o\s\v\z\c\r\s\g\k\t\k\i\t\u\x\y\2\x\z\4\8\f\9\c\k\6\w\g\4\k\b\u\4\f\w\s\t\b\3\p\3\4\i\p\x\y\5\w\g\k\r\4\y\d\e\t\i\j\d\n\c\5\u\u\b\n\m\0\7\n\1\y\o\d\b\4\v\p\j\y\0\d\t\5\l\d\b\k\z\4\i\3\x\y\7\s\x\g\o\n\8\l\y\l\8\r\6\n\e\e\y\r\m\a\d\w\l\r\9\1\m\j\4\c\k\b\4\n\a\c\y\y\5\o\i\8\o\v\b\x\o\x\x\p\c\u\5\e\8\j\o\p\c\6\9\6\2\q\n\4\k\6\6\4\e\7\2\u\7\6\z\q\q\c\1\e\0\l\7\j\d\t\5\d\y\x\9\f\o\d\n\i\d\2\p\t\p\o\r\l\4\f\p\9\p\m\k\i\v\5\f\3\r\k\0\g\z\x\x\3\l\h\3\0\i\9\8\x\o\2\5\l\2\4\k\o\3\a\c\4\y\k\h\g\a\i\b\s\c\t\q\i\w\v\v\h\t\m\l\z\n\8\n\1\n\r\5\8\4\3\h\v\2\g\m\p\y\g\3\i\1\w\j\r\h\w\c\l\h\l\v\9\b\9\w\w\h\q\j\r\4\f\t\r\o\d\3\n\h\u\q\8\a\7\2\p\v\9\t\v\9\4\s\t\5\7\g\m\a\i\g\7\2\b\f\u\7\r\z\5\9\r\l\3\o\d\8\f\r\n\f\v\z\b\t\2\m\p\h\0\9\1\v\1\u\g\d\4\t\u\g\p\v\3\o\g\m\g\v\b\w\9\s\t\c\3\s\w\l\b\j\5\t\i\n\v\m\p\n\x\a\2\l\p\j\y\g\w\y\q\i\w\b\a\9\e\k\7\f\s\1\7\6\d\8\2\6\7\y\h\9\g\9\5\e\b\r\2\3\3\o\c\c\o\i\y\3\m\n\g\p\8\n\u\y\k\5\4\o\h\o\q\c\q\c\d\a\b\d\q\k\u\4\7\u\q\o\g\1 ]] 00:27:06.834 10:52:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:06.834 10:52:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:06.834 10:52:33 -- dd/common.sh@98 -- # xtrace_disable 00:27:06.834 10:52:33 -- common/autotest_common.sh@10 -- # set +x 00:27:06.834 10:52:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:06.834 10:52:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:06.834 [2024-07-24 10:52:33.460379] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:06.834 [2024-07-24 10:52:33.460667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145243 ] 00:27:07.093 [2024-07-24 10:52:33.608089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.093 [2024-07-24 10:52:33.697190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.609  Copying: 512/512 [B] (average 500 kBps) 00:27:07.609 00:27:07.609 10:52:34 -- dd/posix.sh@93 -- # [[ qf3ct40p69z660b697l3quvqwjdwxzp6so5apgpkzom9dvrx9zr74uggg5oaw2xgcfqd5w75z5cib8t5woix3upp1bp3yclgvbbo86kyz0nnifmqxxr90qfepb7rlaxpusibi17d34syix99gh7pqdxm9kecqrkvxzcprs1a3gw9o31h78fbtikvwecpsehgw08496d7etlmvd6mra572c5upincizfskbdd15yka8vbu5of8lqzis0plim36f79fug6iikwvwmsup6wml3d7om3nn5hlx0mv5leq1a2n7cdcnc23dniu9cjm0zm0hqab0epzn4x9trn69zyxan5w2tf1zll9tkbwmo3old2vrmp9lixl73r3wtuo1hqf9x13a2pcio29ze0b8r0bc3nzp1kvsv3oi5h1ipp5nzfzrnrbtnxgxcezyu10m5cgr5fia5ho4ifc3702rerbpp3woqaqbdbpgfcdu8r6xdx5fazrjsvcs02cis24dy2osnd == \q\f\3\c\t\4\0\p\6\9\z\6\6\0\b\6\9\7\l\3\q\u\v\q\w\j\d\w\x\z\p\6\s\o\5\a\p\g\p\k\z\o\m\9\d\v\r\x\9\z\r\7\4\u\g\g\g\5\o\a\w\2\x\g\c\f\q\d\5\w\7\5\z\5\c\i\b\8\t\5\w\o\i\x\3\u\p\p\1\b\p\3\y\c\l\g\v\b\b\o\8\6\k\y\z\0\n\n\i\f\m\q\x\x\r\9\0\q\f\e\p\b\7\r\l\a\x\p\u\s\i\b\i\1\7\d\3\4\s\y\i\x\9\9\g\h\7\p\q\d\x\m\9\k\e\c\q\r\k\v\x\z\c\p\r\s\1\a\3\g\w\9\o\3\1\h\7\8\f\b\t\i\k\v\w\e\c\p\s\e\h\g\w\0\8\4\9\6\d\7\e\t\l\m\v\d\6\m\r\a\5\7\2\c\5\u\p\i\n\c\i\z\f\s\k\b\d\d\1\5\y\k\a\8\v\b\u\5\o\f\8\l\q\z\i\s\0\p\l\i\m\3\6\f\7\9\f\u\g\6\i\i\k\w\v\w\m\s\u\p\6\w\m\l\3\d\7\o\m\3\n\n\5\h\l\x\0\m\v\5\l\e\q\1\a\2\n\7\c\d\c\n\c\2\3\d\n\i\u\9\c\j\m\0\z\m\0\h\q\a\b\0\e\p\z\n\4\x\9\t\r\n\6\9\z\y\x\a\n\5\w\2\t\f\1\z\l\l\9\t\k\b\w\m\o\3\o\l\d\2\v\r\m\p\9\l\i\x\l\7\3\r\3\w\t\u\o\1\h\q\f\9\x\1\3\a\2\p\c\i\o\2\9\z\e\0\b\8\r\0\b\c\3\n\z\p\1\k\v\s\v\3\o\i\5\h\1\i\p\p\5\n\z\f\z\r\n\r\b\t\n\x\g\x\c\e\z\y\u\1\0\m\5\c\g\r\5\f\i\a\5\h\o\4\i\f\c\3\7\0\2\r\e\r\b\p\p\3\w\o\q\a\q\b\d\b\p\g\f\c\d\u\8\r\6\x\d\x\5\f\a\z\r\j\s\v\c\s\0\2\c\i\s\2\4\d\y\2\o\s\n\d ]] 00:27:07.609 10:52:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:07.609 10:52:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:07.609 [2024-07-24 10:52:34.116742] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:07.609 [2024-07-24 10:52:34.117193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145260 ] 00:27:07.609 [2024-07-24 10:52:34.261886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.931 [2024-07-24 10:52:34.348353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.190  Copying: 512/512 [B] (average 500 kBps) 00:27:08.191 00:27:08.191 10:52:34 -- dd/posix.sh@93 -- # [[ qf3ct40p69z660b697l3quvqwjdwxzp6so5apgpkzom9dvrx9zr74uggg5oaw2xgcfqd5w75z5cib8t5woix3upp1bp3yclgvbbo86kyz0nnifmqxxr90qfepb7rlaxpusibi17d34syix99gh7pqdxm9kecqrkvxzcprs1a3gw9o31h78fbtikvwecpsehgw08496d7etlmvd6mra572c5upincizfskbdd15yka8vbu5of8lqzis0plim36f79fug6iikwvwmsup6wml3d7om3nn5hlx0mv5leq1a2n7cdcnc23dniu9cjm0zm0hqab0epzn4x9trn69zyxan5w2tf1zll9tkbwmo3old2vrmp9lixl73r3wtuo1hqf9x13a2pcio29ze0b8r0bc3nzp1kvsv3oi5h1ipp5nzfzrnrbtnxgxcezyu10m5cgr5fia5ho4ifc3702rerbpp3woqaqbdbpgfcdu8r6xdx5fazrjsvcs02cis24dy2osnd == \q\f\3\c\t\4\0\p\6\9\z\6\6\0\b\6\9\7\l\3\q\u\v\q\w\j\d\w\x\z\p\6\s\o\5\a\p\g\p\k\z\o\m\9\d\v\r\x\9\z\r\7\4\u\g\g\g\5\o\a\w\2\x\g\c\f\q\d\5\w\7\5\z\5\c\i\b\8\t\5\w\o\i\x\3\u\p\p\1\b\p\3\y\c\l\g\v\b\b\o\8\6\k\y\z\0\n\n\i\f\m\q\x\x\r\9\0\q\f\e\p\b\7\r\l\a\x\p\u\s\i\b\i\1\7\d\3\4\s\y\i\x\9\9\g\h\7\p\q\d\x\m\9\k\e\c\q\r\k\v\x\z\c\p\r\s\1\a\3\g\w\9\o\3\1\h\7\8\f\b\t\i\k\v\w\e\c\p\s\e\h\g\w\0\8\4\9\6\d\7\e\t\l\m\v\d\6\m\r\a\5\7\2\c\5\u\p\i\n\c\i\z\f\s\k\b\d\d\1\5\y\k\a\8\v\b\u\5\o\f\8\l\q\z\i\s\0\p\l\i\m\3\6\f\7\9\f\u\g\6\i\i\k\w\v\w\m\s\u\p\6\w\m\l\3\d\7\o\m\3\n\n\5\h\l\x\0\m\v\5\l\e\q\1\a\2\n\7\c\d\c\n\c\2\3\d\n\i\u\9\c\j\m\0\z\m\0\h\q\a\b\0\e\p\z\n\4\x\9\t\r\n\6\9\z\y\x\a\n\5\w\2\t\f\1\z\l\l\9\t\k\b\w\m\o\3\o\l\d\2\v\r\m\p\9\l\i\x\l\7\3\r\3\w\t\u\o\1\h\q\f\9\x\1\3\a\2\p\c\i\o\2\9\z\e\0\b\8\r\0\b\c\3\n\z\p\1\k\v\s\v\3\o\i\5\h\1\i\p\p\5\n\z\f\z\r\n\r\b\t\n\x\g\x\c\e\z\y\u\1\0\m\5\c\g\r\5\f\i\a\5\h\o\4\i\f\c\3\7\0\2\r\e\r\b\p\p\3\w\o\q\a\q\b\d\b\p\g\f\c\d\u\8\r\6\x\d\x\5\f\a\z\r\j\s\v\c\s\0\2\c\i\s\2\4\d\y\2\o\s\n\d ]] 00:27:08.191 10:52:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:08.191 10:52:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:08.191 [2024-07-24 10:52:34.761136] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:08.191 [2024-07-24 10:52:34.761539] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145272 ] 00:27:08.450 [2024-07-24 10:52:34.901181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.450 [2024-07-24 10:52:34.999635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.710  Copying: 512/512 [B] (average 166 kBps) 00:27:08.710 00:27:08.711 10:52:35 -- dd/posix.sh@93 -- # [[ qf3ct40p69z660b697l3quvqwjdwxzp6so5apgpkzom9dvrx9zr74uggg5oaw2xgcfqd5w75z5cib8t5woix3upp1bp3yclgvbbo86kyz0nnifmqxxr90qfepb7rlaxpusibi17d34syix99gh7pqdxm9kecqrkvxzcprs1a3gw9o31h78fbtikvwecpsehgw08496d7etlmvd6mra572c5upincizfskbdd15yka8vbu5of8lqzis0plim36f79fug6iikwvwmsup6wml3d7om3nn5hlx0mv5leq1a2n7cdcnc23dniu9cjm0zm0hqab0epzn4x9trn69zyxan5w2tf1zll9tkbwmo3old2vrmp9lixl73r3wtuo1hqf9x13a2pcio29ze0b8r0bc3nzp1kvsv3oi5h1ipp5nzfzrnrbtnxgxcezyu10m5cgr5fia5ho4ifc3702rerbpp3woqaqbdbpgfcdu8r6xdx5fazrjsvcs02cis24dy2osnd == \q\f\3\c\t\4\0\p\6\9\z\6\6\0\b\6\9\7\l\3\q\u\v\q\w\j\d\w\x\z\p\6\s\o\5\a\p\g\p\k\z\o\m\9\d\v\r\x\9\z\r\7\4\u\g\g\g\5\o\a\w\2\x\g\c\f\q\d\5\w\7\5\z\5\c\i\b\8\t\5\w\o\i\x\3\u\p\p\1\b\p\3\y\c\l\g\v\b\b\o\8\6\k\y\z\0\n\n\i\f\m\q\x\x\r\9\0\q\f\e\p\b\7\r\l\a\x\p\u\s\i\b\i\1\7\d\3\4\s\y\i\x\9\9\g\h\7\p\q\d\x\m\9\k\e\c\q\r\k\v\x\z\c\p\r\s\1\a\3\g\w\9\o\3\1\h\7\8\f\b\t\i\k\v\w\e\c\p\s\e\h\g\w\0\8\4\9\6\d\7\e\t\l\m\v\d\6\m\r\a\5\7\2\c\5\u\p\i\n\c\i\z\f\s\k\b\d\d\1\5\y\k\a\8\v\b\u\5\o\f\8\l\q\z\i\s\0\p\l\i\m\3\6\f\7\9\f\u\g\6\i\i\k\w\v\w\m\s\u\p\6\w\m\l\3\d\7\o\m\3\n\n\5\h\l\x\0\m\v\5\l\e\q\1\a\2\n\7\c\d\c\n\c\2\3\d\n\i\u\9\c\j\m\0\z\m\0\h\q\a\b\0\e\p\z\n\4\x\9\t\r\n\6\9\z\y\x\a\n\5\w\2\t\f\1\z\l\l\9\t\k\b\w\m\o\3\o\l\d\2\v\r\m\p\9\l\i\x\l\7\3\r\3\w\t\u\o\1\h\q\f\9\x\1\3\a\2\p\c\i\o\2\9\z\e\0\b\8\r\0\b\c\3\n\z\p\1\k\v\s\v\3\o\i\5\h\1\i\p\p\5\n\z\f\z\r\n\r\b\t\n\x\g\x\c\e\z\y\u\1\0\m\5\c\g\r\5\f\i\a\5\h\o\4\i\f\c\3\7\0\2\r\e\r\b\p\p\3\w\o\q\a\q\b\d\b\p\g\f\c\d\u\8\r\6\x\d\x\5\f\a\z\r\j\s\v\c\s\0\2\c\i\s\2\4\d\y\2\o\s\n\d ]] 00:27:08.711 10:52:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:08.711 10:52:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:08.970 [2024-07-24 10:52:35.450900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:08.970 [2024-07-24 10:52:35.451263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145282 ] 00:27:08.970 [2024-07-24 10:52:35.600029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.229 [2024-07-24 10:52:35.676950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.488  Copying: 512/512 [B] (average 166 kBps) 00:27:09.488 00:27:09.488 10:52:36 -- dd/posix.sh@93 -- # [[ qf3ct40p69z660b697l3quvqwjdwxzp6so5apgpkzom9dvrx9zr74uggg5oaw2xgcfqd5w75z5cib8t5woix3upp1bp3yclgvbbo86kyz0nnifmqxxr90qfepb7rlaxpusibi17d34syix99gh7pqdxm9kecqrkvxzcprs1a3gw9o31h78fbtikvwecpsehgw08496d7etlmvd6mra572c5upincizfskbdd15yka8vbu5of8lqzis0plim36f79fug6iikwvwmsup6wml3d7om3nn5hlx0mv5leq1a2n7cdcnc23dniu9cjm0zm0hqab0epzn4x9trn69zyxan5w2tf1zll9tkbwmo3old2vrmp9lixl73r3wtuo1hqf9x13a2pcio29ze0b8r0bc3nzp1kvsv3oi5h1ipp5nzfzrnrbtnxgxcezyu10m5cgr5fia5ho4ifc3702rerbpp3woqaqbdbpgfcdu8r6xdx5fazrjsvcs02cis24dy2osnd == \q\f\3\c\t\4\0\p\6\9\z\6\6\0\b\6\9\7\l\3\q\u\v\q\w\j\d\w\x\z\p\6\s\o\5\a\p\g\p\k\z\o\m\9\d\v\r\x\9\z\r\7\4\u\g\g\g\5\o\a\w\2\x\g\c\f\q\d\5\w\7\5\z\5\c\i\b\8\t\5\w\o\i\x\3\u\p\p\1\b\p\3\y\c\l\g\v\b\b\o\8\6\k\y\z\0\n\n\i\f\m\q\x\x\r\9\0\q\f\e\p\b\7\r\l\a\x\p\u\s\i\b\i\1\7\d\3\4\s\y\i\x\9\9\g\h\7\p\q\d\x\m\9\k\e\c\q\r\k\v\x\z\c\p\r\s\1\a\3\g\w\9\o\3\1\h\7\8\f\b\t\i\k\v\w\e\c\p\s\e\h\g\w\0\8\4\9\6\d\7\e\t\l\m\v\d\6\m\r\a\5\7\2\c\5\u\p\i\n\c\i\z\f\s\k\b\d\d\1\5\y\k\a\8\v\b\u\5\o\f\8\l\q\z\i\s\0\p\l\i\m\3\6\f\7\9\f\u\g\6\i\i\k\w\v\w\m\s\u\p\6\w\m\l\3\d\7\o\m\3\n\n\5\h\l\x\0\m\v\5\l\e\q\1\a\2\n\7\c\d\c\n\c\2\3\d\n\i\u\9\c\j\m\0\z\m\0\h\q\a\b\0\e\p\z\n\4\x\9\t\r\n\6\9\z\y\x\a\n\5\w\2\t\f\1\z\l\l\9\t\k\b\w\m\o\3\o\l\d\2\v\r\m\p\9\l\i\x\l\7\3\r\3\w\t\u\o\1\h\q\f\9\x\1\3\a\2\p\c\i\o\2\9\z\e\0\b\8\r\0\b\c\3\n\z\p\1\k\v\s\v\3\o\i\5\h\1\i\p\p\5\n\z\f\z\r\n\r\b\t\n\x\g\x\c\e\z\y\u\1\0\m\5\c\g\r\5\f\i\a\5\h\o\4\i\f\c\3\7\0\2\r\e\r\b\p\p\3\w\o\q\a\q\b\d\b\p\g\f\c\d\u\8\r\6\x\d\x\5\f\a\z\r\j\s\v\c\s\0\2\c\i\s\2\4\d\y\2\o\s\n\d ]] 00:27:09.488 00:27:09.488 real 0m5.350s 00:27:09.488 user 0m2.718s 00:27:09.488 sys 0m1.519s 00:27:09.488 10:52:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.488 10:52:36 -- common/autotest_common.sh@10 -- # set +x 00:27:09.488 ************************************ 00:27:09.488 END TEST dd_flags_misc 00:27:09.488 ************************************ 00:27:09.488 10:52:36 -- dd/posix.sh@131 -- # tests_forced_aio 00:27:09.488 10:52:36 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:27:09.488 * Second test run, using AIO 00:27:09.488 10:52:36 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:27:09.488 10:52:36 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:27:09.488 10:52:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:09.488 10:52:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:09.488 10:52:36 -- common/autotest_common.sh@10 -- # set +x 00:27:09.488 ************************************ 00:27:09.488 START TEST dd_flag_append_forced_aio 00:27:09.488 ************************************ 00:27:09.488 10:52:36 -- common/autotest_common.sh@1104 -- # append 00:27:09.488 10:52:36 -- dd/posix.sh@16 -- # local dump0 00:27:09.488 10:52:36 -- dd/posix.sh@17 -- # local dump1 00:27:09.488 10:52:36 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:09.488 10:52:36 -- dd/common.sh@98 -- # xtrace_disable 
00:27:09.488 10:52:36 -- common/autotest_common.sh@10 -- # set +x 00:27:09.488 10:52:36 -- dd/posix.sh@19 -- # dump0=bims0o9akz76e804gzbgqqgmewckbt19 00:27:09.488 10:52:36 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:09.488 10:52:36 -- dd/common.sh@98 -- # xtrace_disable 00:27:09.488 10:52:36 -- common/autotest_common.sh@10 -- # set +x 00:27:09.488 10:52:36 -- dd/posix.sh@20 -- # dump1=o6aak6ofigil6ep6o0sx40bp21t98mid 00:27:09.488 10:52:36 -- dd/posix.sh@22 -- # printf %s bims0o9akz76e804gzbgqqgmewckbt19 00:27:09.488 10:52:36 -- dd/posix.sh@23 -- # printf %s o6aak6ofigil6ep6o0sx40bp21t98mid 00:27:09.489 10:52:36 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:09.489 [2024-07-24 10:52:36.162815] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:09.489 [2024-07-24 10:52:36.163031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145315 ] 00:27:09.747 [2024-07-24 10:52:36.304027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.747 [2024-07-24 10:52:36.389401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.266  Copying: 32/32 [B] (average 31 kBps) 00:27:10.266 00:27:10.266 10:52:36 -- dd/posix.sh@27 -- # [[ o6aak6ofigil6ep6o0sx40bp21t98midbims0o9akz76e804gzbgqqgmewckbt19 == \o\6\a\a\k\6\o\f\i\g\i\l\6\e\p\6\o\0\s\x\4\0\b\p\2\1\t\9\8\m\i\d\b\i\m\s\0\o\9\a\k\z\7\6\e\8\0\4\g\z\b\g\q\q\g\m\e\w\c\k\b\t\1\9 ]] 00:27:10.266 00:27:10.266 real 0m0.671s 00:27:10.266 user 0m0.323s 00:27:10.266 sys 0m0.206s 00:27:10.266 10:52:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.266 ************************************ 00:27:10.266 END TEST dd_flag_append_forced_aio 00:27:10.266 10:52:36 -- common/autotest_common.sh@10 -- # set +x 00:27:10.266 ************************************ 00:27:10.266 10:52:36 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:27:10.266 10:52:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:10.266 10:52:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.266 10:52:36 -- common/autotest_common.sh@10 -- # set +x 00:27:10.266 ************************************ 00:27:10.266 START TEST dd_flag_directory_forced_aio 00:27:10.266 ************************************ 00:27:10.266 10:52:36 -- common/autotest_common.sh@1104 -- # directory 00:27:10.266 10:52:36 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:10.266 10:52:36 -- common/autotest_common.sh@640 -- # local es=0 00:27:10.266 10:52:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:10.266 10:52:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.266 10:52:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.266 10:52:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.266 10:52:36 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.266 10:52:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.266 10:52:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.266 10:52:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.266 10:52:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:10.266 10:52:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:10.266 [2024-07-24 10:52:36.902401] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:10.266 [2024-07-24 10:52:36.902645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145356 ] 00:27:10.525 [2024-07-24 10:52:37.049536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.525 [2024-07-24 10:52:37.132654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.784 [2024-07-24 10:52:37.221968] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:10.784 [2024-07-24 10:52:37.222340] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:10.784 [2024-07-24 10:52:37.222442] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:10.784 [2024-07-24 10:52:37.349200] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:10.784 10:52:37 -- common/autotest_common.sh@643 -- # es=236 00:27:10.784 10:52:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:10.784 10:52:37 -- common/autotest_common.sh@652 -- # es=108 00:27:10.784 10:52:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:10.784 10:52:37 -- common/autotest_common.sh@660 -- # es=1 00:27:10.784 10:52:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:10.784 10:52:37 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:10.784 10:52:37 -- common/autotest_common.sh@640 -- # local es=0 00:27:10.784 10:52:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:10.784 10:52:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.784 10:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.784 10:52:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.784 10:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.784 10:52:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:10.784 10:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:10.784 10:52:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
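The directory-flag check exercised just above boils down to the negative test sketched here: spdk_dd is pointed at a regular file with --iflag=directory and is expected to fail with "Not a directory". Paths and flags are copied from this log; the simplified error handling (a plain exit-status check instead of the harness's NOT/es bookkeeping) is an assumption.

#!/usr/bin/env bash
# Sketch: --iflag=directory on a regular file must be rejected by spdk_dd.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0   # regular file, not a directory
if "$DD" --aio --if="$SRC" --iflag=directory --of="$SRC" > err.log 2>&1; then
  echo "unexpected: spdk_dd accepted --iflag=directory on a regular file" >&2
  exit 1
fi
grep -q 'Not a directory' err.log && echo 'got the expected "Not a directory" failure'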
00:27:10.784 10:52:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:10.784 10:52:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:11.043 [2024-07-24 10:52:37.524562] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:11.043 [2024-07-24 10:52:37.524829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145361 ] 00:27:11.043 [2024-07-24 10:52:37.670946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.302 [2024-07-24 10:52:37.760529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.302 [2024-07-24 10:52:37.848281] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:11.302 [2024-07-24 10:52:37.848624] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:11.302 [2024-07-24 10:52:37.848751] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:11.302 [2024-07-24 10:52:37.973142] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:11.560 10:52:38 -- common/autotest_common.sh@643 -- # es=236 00:27:11.560 10:52:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:11.560 10:52:38 -- common/autotest_common.sh@652 -- # es=108 00:27:11.560 10:52:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:11.560 10:52:38 -- common/autotest_common.sh@660 -- # es=1 00:27:11.560 10:52:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:11.560 00:27:11.560 real 0m1.259s 00:27:11.560 user 0m0.695s 00:27:11.560 sys 0m0.361s 00:27:11.560 10:52:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.560 10:52:38 -- common/autotest_common.sh@10 -- # set +x 00:27:11.560 ************************************ 00:27:11.560 END TEST dd_flag_directory_forced_aio 00:27:11.560 ************************************ 00:27:11.560 10:52:38 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:27:11.560 10:52:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:11.560 10:52:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:11.560 10:52:38 -- common/autotest_common.sh@10 -- # set +x 00:27:11.560 ************************************ 00:27:11.560 START TEST dd_flag_nofollow_forced_aio 00:27:11.560 ************************************ 00:27:11.560 10:52:38 -- common/autotest_common.sh@1104 -- # nofollow 00:27:11.560 10:52:38 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:11.560 10:52:38 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:11.560 10:52:38 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:11.560 10:52:38 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:11.560 10:52:38 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:11.560 10:52:38 -- common/autotest_common.sh@640 -- # local es=0 00:27:11.560 10:52:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:11.560 10:52:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.560 10:52:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:11.560 10:52:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.560 10:52:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:11.560 10:52:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.560 10:52:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:11.561 10:52:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.561 10:52:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:11.561 10:52:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:11.561 [2024-07-24 10:52:38.210134] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:11.561 [2024-07-24 10:52:38.210542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145403 ] 00:27:11.819 [2024-07-24 10:52:38.348415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.819 [2024-07-24 10:52:38.425140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.078 [2024-07-24 10:52:38.512156] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:12.078 [2024-07-24 10:52:38.512504] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:12.078 [2024-07-24 10:52:38.512620] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:12.078 [2024-07-24 10:52:38.633678] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:12.078 10:52:38 -- common/autotest_common.sh@643 -- # es=216 00:27:12.078 10:52:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:12.078 10:52:38 -- common/autotest_common.sh@652 -- # es=88 00:27:12.078 10:52:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:12.078 10:52:38 -- common/autotest_common.sh@660 -- # es=1 00:27:12.078 10:52:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:12.078 10:52:38 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:12.078 10:52:38 -- common/autotest_common.sh@640 -- # local es=0 00:27:12.078 10:52:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:12.078 10:52:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.078 10:52:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:12.078 10:52:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.078 10:52:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:12.078 10:52:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.078 10:52:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:12.078 10:52:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:12.078 10:52:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:12.078 10:52:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:12.337 [2024-07-24 10:52:38.806965] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:12.337 [2024-07-24 10:52:38.807248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145412 ] 00:27:12.337 [2024-07-24 10:52:38.955995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.596 [2024-07-24 10:52:39.040876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.596 [2024-07-24 10:52:39.124570] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:12.596 [2024-07-24 10:52:39.124914] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:12.596 [2024-07-24 10:52:39.125018] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:12.596 [2024-07-24 10:52:39.248177] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:12.854 10:52:39 -- common/autotest_common.sh@643 -- # es=216 00:27:12.854 10:52:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:12.854 10:52:39 -- common/autotest_common.sh@652 -- # es=88 00:27:12.854 10:52:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:12.854 10:52:39 -- common/autotest_common.sh@660 -- # es=1 00:27:12.854 10:52:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:12.854 10:52:39 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:12.854 10:52:39 -- dd/common.sh@98 -- # xtrace_disable 00:27:12.854 10:52:39 -- common/autotest_common.sh@10 -- # set +x 00:27:12.854 10:52:39 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:12.854 [2024-07-24 10:52:39.439372] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:12.854 [2024-07-24 10:52:39.439686] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145427 ] 00:27:13.113 [2024-07-24 10:52:39.585013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.113 [2024-07-24 10:52:39.651866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.372  Copying: 512/512 [B] (average 500 kBps) 00:27:13.372 00:27:13.372 10:52:40 -- dd/posix.sh@49 -- # [[ rpoqv8iu3o18r2uigx7kxdrw92sie6k51xdojm2cipvdvdldjd64z8he9ep82kn300ngogncp5izbaq4gs5issolvpqar40sfdale6ku6ywem8e082nhx98dfn5wq292p74pvu0wq6pf4g1idb8s09erbyput5auvm7c5fr9e11b03v80pgskkr61qp6v7264lm7z68mnw7gwv1oubqgxg44ppd0v8rct30bmfc0vukdwc6urgtdos9syeo5hwdzqbzue3tp1ew8zyuhqtjlke96zbhw42cijjy93e70l760trf6j573kcaea6g7eex2w01cxna8y4w2pu0tmz76ucx62rw69l92kuty8mbhg7bob0cnmyos924m0w2suahu94ug97qmjflvf0hm1e22ftjdbp7igfpbm1x08wdhhume02f9fl1y218cvphnr6uugxsf7j3uak14gpptguvs0s6e5kcxx53q6k0c11zrplbci7vhsad1x7vnx08fhnfr == \r\p\o\q\v\8\i\u\3\o\1\8\r\2\u\i\g\x\7\k\x\d\r\w\9\2\s\i\e\6\k\5\1\x\d\o\j\m\2\c\i\p\v\d\v\d\l\d\j\d\6\4\z\8\h\e\9\e\p\8\2\k\n\3\0\0\n\g\o\g\n\c\p\5\i\z\b\a\q\4\g\s\5\i\s\s\o\l\v\p\q\a\r\4\0\s\f\d\a\l\e\6\k\u\6\y\w\e\m\8\e\0\8\2\n\h\x\9\8\d\f\n\5\w\q\2\9\2\p\7\4\p\v\u\0\w\q\6\p\f\4\g\1\i\d\b\8\s\0\9\e\r\b\y\p\u\t\5\a\u\v\m\7\c\5\f\r\9\e\1\1\b\0\3\v\8\0\p\g\s\k\k\r\6\1\q\p\6\v\7\2\6\4\l\m\7\z\6\8\m\n\w\7\g\w\v\1\o\u\b\q\g\x\g\4\4\p\p\d\0\v\8\r\c\t\3\0\b\m\f\c\0\v\u\k\d\w\c\6\u\r\g\t\d\o\s\9\s\y\e\o\5\h\w\d\z\q\b\z\u\e\3\t\p\1\e\w\8\z\y\u\h\q\t\j\l\k\e\9\6\z\b\h\w\4\2\c\i\j\j\y\9\3\e\7\0\l\7\6\0\t\r\f\6\j\5\7\3\k\c\a\e\a\6\g\7\e\e\x\2\w\0\1\c\x\n\a\8\y\4\w\2\p\u\0\t\m\z\7\6\u\c\x\6\2\r\w\6\9\l\9\2\k\u\t\y\8\m\b\h\g\7\b\o\b\0\c\n\m\y\o\s\9\2\4\m\0\w\2\s\u\a\h\u\9\4\u\g\9\7\q\m\j\f\l\v\f\0\h\m\1\e\2\2\f\t\j\d\b\p\7\i\g\f\p\b\m\1\x\0\8\w\d\h\h\u\m\e\0\2\f\9\f\l\1\y\2\1\8\c\v\p\h\n\r\6\u\u\g\x\s\f\7\j\3\u\a\k\1\4\g\p\p\t\g\u\v\s\0\s\6\e\5\k\c\x\x\5\3\q\6\k\0\c\1\1\z\r\p\l\b\c\i\7\v\h\s\a\d\1\x\7\v\n\x\0\8\f\h\n\f\r ]] 00:27:13.372 00:27:13.372 real 0m1.890s 00:27:13.372 user 0m0.956s 00:27:13.372 sys 0m0.589s 00:27:13.372 10:52:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.372 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:27:13.372 ************************************ 00:27:13.372 END TEST dd_flag_nofollow_forced_aio 00:27:13.372 ************************************ 00:27:13.630 10:52:40 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:27:13.630 10:52:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.630 10:52:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.630 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:27:13.630 ************************************ 00:27:13.630 START TEST dd_flag_noatime_forced_aio 00:27:13.630 ************************************ 00:27:13.630 10:52:40 -- common/autotest_common.sh@1104 -- # noatime 00:27:13.630 10:52:40 -- dd/posix.sh@53 -- # local atime_if 00:27:13.630 10:52:40 -- dd/posix.sh@54 -- # local atime_of 00:27:13.630 10:52:40 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:13.630 10:52:40 -- dd/common.sh@98 -- # xtrace_disable 00:27:13.630 10:52:40 -- common/autotest_common.sh@10 -- # set +x 00:27:13.630 10:52:40 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:13.630 10:52:40 -- dd/posix.sh@60 -- # atime_if=1721818359 
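The noatime check that starts in the trace above follows the pattern sketched below: snapshot the source file's atime, wait a second, copy it with --iflag=noatime, and confirm the atime did not move. The stat --printf=%X / sleep 1 / spdk_dd steps mirror the log; treating an unchanged atime as success is the simplifying assumption (on relatime or noatime mounts a control run without the flag may not bump atime either).

#!/usr/bin/env bash
# Sketch: reading the source through spdk_dd with --iflag=noatime should
# leave its access time (stat %X) unchanged.
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
atime_before=$(stat --printf=%X "$SRC")
sleep 1                                  # make a changed atime observable
"$DD" --aio --if="$SRC" --iflag=noatime --of="$DST"
atime_after=$(stat --printf=%X "$SRC")
(( atime_before == atime_after )) && echo "atime preserved by --iflag=noatime"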
00:27:13.630 10:52:40 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:13.630 10:52:40 -- dd/posix.sh@61 -- # atime_of=1721818360 00:27:13.630 10:52:40 -- dd/posix.sh@66 -- # sleep 1 00:27:14.566 10:52:41 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:14.566 [2024-07-24 10:52:41.175850] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:14.566 [2024-07-24 10:52:41.176109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145481 ] 00:27:14.825 [2024-07-24 10:52:41.319694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.825 [2024-07-24 10:52:41.394662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.084  Copying: 512/512 [B] (average 500 kBps) 00:27:15.084 00:27:15.343 10:52:41 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:15.343 10:52:41 -- dd/posix.sh@69 -- # (( atime_if == 1721818359 )) 00:27:15.343 10:52:41 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:15.343 10:52:41 -- dd/posix.sh@70 -- # (( atime_of == 1721818360 )) 00:27:15.343 10:52:41 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:15.343 [2024-07-24 10:52:41.843566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:15.343 [2024-07-24 10:52:41.844428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145493 ] 00:27:15.343 [2024-07-24 10:52:41.992237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.639 [2024-07-24 10:52:42.061273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.941  Copying: 512/512 [B] (average 500 kBps) 00:27:15.941 00:27:15.941 10:52:42 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:15.941 10:52:42 -- dd/posix.sh@73 -- # (( atime_if < 1721818362 )) 00:27:15.941 00:27:15.941 real 0m2.338s 00:27:15.941 user 0m0.647s 00:27:15.941 sys 0m0.413s 00:27:15.941 10:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.941 ************************************ 00:27:15.941 END TEST dd_flag_noatime_forced_aio 00:27:15.941 ************************************ 00:27:15.941 10:52:42 -- common/autotest_common.sh@10 -- # set +x 00:27:15.941 10:52:42 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:27:15.941 10:52:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:15.941 10:52:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:15.941 10:52:42 -- common/autotest_common.sh@10 -- # set +x 00:27:15.941 ************************************ 00:27:15.941 START TEST dd_flags_misc_forced_aio 00:27:15.941 ************************************ 00:27:15.941 10:52:42 -- common/autotest_common.sh@1104 -- # io 00:27:15.941 10:52:42 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:15.941 10:52:42 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:27:15.941 10:52:42 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:15.941 10:52:42 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:15.941 10:52:42 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:15.941 10:52:42 -- dd/common.sh@98 -- # xtrace_disable 00:27:15.941 10:52:42 -- common/autotest_common.sh@10 -- # set +x 00:27:15.941 10:52:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:15.941 10:52:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:15.941 [2024-07-24 10:52:42.563417] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:15.941 [2024-07-24 10:52:42.563705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145524 ] 00:27:16.201 [2024-07-24 10:52:42.711479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.201 [2024-07-24 10:52:42.781579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.459  Copying: 512/512 [B] (average 500 kBps) 00:27:16.459 00:27:16.718 10:52:43 -- dd/posix.sh@93 -- # [[ 2lwzj4puf2ogpprb28f2amk4j173ofc19zectwl70frbrv6saiik2w1j9q97uxp18bl2szkgnypqi1p1ap1sxtneu891gpxqbnzdp7oblpfkc02a0vr4h3251ulpv2lgtrko04f6uxefwzgdrtbshfwzu4jey9503i4nvttp7h7mbqj4q9lweealpwpoazs8ymmkudg164lzuv5j831kb1rqoos413ce5tcadh3rj4vis06gj89sf7i7idelu8oa392s8u4spuqrbxd1wc7od97t4mtus9u34zvpqnkz8ges4c38vblez5yhbvs4qmv0zt9xwlqac2frv5cvnp9c2z1qzxvjbw3ym6ussp9dzenfkne3spj6rw381mrv2atohzhlfw3d6neaec5abebowc4rnr16sco9v5xo41ixjq2153x536kx6lp0kveb2axyhuyif47hzty3pvs07qjgjs1511ikefttjndgmn71bii99gbpgsjjts4ywv2vtg5g == \2\l\w\z\j\4\p\u\f\2\o\g\p\p\r\b\2\8\f\2\a\m\k\4\j\1\7\3\o\f\c\1\9\z\e\c\t\w\l\7\0\f\r\b\r\v\6\s\a\i\i\k\2\w\1\j\9\q\9\7\u\x\p\1\8\b\l\2\s\z\k\g\n\y\p\q\i\1\p\1\a\p\1\s\x\t\n\e\u\8\9\1\g\p\x\q\b\n\z\d\p\7\o\b\l\p\f\k\c\0\2\a\0\v\r\4\h\3\2\5\1\u\l\p\v\2\l\g\t\r\k\o\0\4\f\6\u\x\e\f\w\z\g\d\r\t\b\s\h\f\w\z\u\4\j\e\y\9\5\0\3\i\4\n\v\t\t\p\7\h\7\m\b\q\j\4\q\9\l\w\e\e\a\l\p\w\p\o\a\z\s\8\y\m\m\k\u\d\g\1\6\4\l\z\u\v\5\j\8\3\1\k\b\1\r\q\o\o\s\4\1\3\c\e\5\t\c\a\d\h\3\r\j\4\v\i\s\0\6\g\j\8\9\s\f\7\i\7\i\d\e\l\u\8\o\a\3\9\2\s\8\u\4\s\p\u\q\r\b\x\d\1\w\c\7\o\d\9\7\t\4\m\t\u\s\9\u\3\4\z\v\p\q\n\k\z\8\g\e\s\4\c\3\8\v\b\l\e\z\5\y\h\b\v\s\4\q\m\v\0\z\t\9\x\w\l\q\a\c\2\f\r\v\5\c\v\n\p\9\c\2\z\1\q\z\x\v\j\b\w\3\y\m\6\u\s\s\p\9\d\z\e\n\f\k\n\e\3\s\p\j\6\r\w\3\8\1\m\r\v\2\a\t\o\h\z\h\l\f\w\3\d\6\n\e\a\e\c\5\a\b\e\b\o\w\c\4\r\n\r\1\6\s\c\o\9\v\5\x\o\4\1\i\x\j\q\2\1\5\3\x\5\3\6\k\x\6\l\p\0\k\v\e\b\2\a\x\y\h\u\y\i\f\4\7\h\z\t\y\3\p\v\s\0\7\q\j\g\j\s\1\5\1\1\i\k\e\f\t\t\j\n\d\g\m\n\7\1\b\i\i\9\9\g\b\p\g\s\j\j\t\s\4\y\w\v\2\v\t\g\5\g ]] 00:27:16.718 10:52:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:16.719 10:52:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:16.719 [2024-07-24 10:52:43.205012] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:16.719 [2024-07-24 10:52:43.205784] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145545 ] 00:27:16.719 [2024-07-24 10:52:43.351525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.977 [2024-07-24 10:52:43.417123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.235  Copying: 512/512 [B] (average 500 kBps) 00:27:17.235 00:27:17.235 10:52:43 -- dd/posix.sh@93 -- # [[ 2lwzj4puf2ogpprb28f2amk4j173ofc19zectwl70frbrv6saiik2w1j9q97uxp18bl2szkgnypqi1p1ap1sxtneu891gpxqbnzdp7oblpfkc02a0vr4h3251ulpv2lgtrko04f6uxefwzgdrtbshfwzu4jey9503i4nvttp7h7mbqj4q9lweealpwpoazs8ymmkudg164lzuv5j831kb1rqoos413ce5tcadh3rj4vis06gj89sf7i7idelu8oa392s8u4spuqrbxd1wc7od97t4mtus9u34zvpqnkz8ges4c38vblez5yhbvs4qmv0zt9xwlqac2frv5cvnp9c2z1qzxvjbw3ym6ussp9dzenfkne3spj6rw381mrv2atohzhlfw3d6neaec5abebowc4rnr16sco9v5xo41ixjq2153x536kx6lp0kveb2axyhuyif47hzty3pvs07qjgjs1511ikefttjndgmn71bii99gbpgsjjts4ywv2vtg5g == \2\l\w\z\j\4\p\u\f\2\o\g\p\p\r\b\2\8\f\2\a\m\k\4\j\1\7\3\o\f\c\1\9\z\e\c\t\w\l\7\0\f\r\b\r\v\6\s\a\i\i\k\2\w\1\j\9\q\9\7\u\x\p\1\8\b\l\2\s\z\k\g\n\y\p\q\i\1\p\1\a\p\1\s\x\t\n\e\u\8\9\1\g\p\x\q\b\n\z\d\p\7\o\b\l\p\f\k\c\0\2\a\0\v\r\4\h\3\2\5\1\u\l\p\v\2\l\g\t\r\k\o\0\4\f\6\u\x\e\f\w\z\g\d\r\t\b\s\h\f\w\z\u\4\j\e\y\9\5\0\3\i\4\n\v\t\t\p\7\h\7\m\b\q\j\4\q\9\l\w\e\e\a\l\p\w\p\o\a\z\s\8\y\m\m\k\u\d\g\1\6\4\l\z\u\v\5\j\8\3\1\k\b\1\r\q\o\o\s\4\1\3\c\e\5\t\c\a\d\h\3\r\j\4\v\i\s\0\6\g\j\8\9\s\f\7\i\7\i\d\e\l\u\8\o\a\3\9\2\s\8\u\4\s\p\u\q\r\b\x\d\1\w\c\7\o\d\9\7\t\4\m\t\u\s\9\u\3\4\z\v\p\q\n\k\z\8\g\e\s\4\c\3\8\v\b\l\e\z\5\y\h\b\v\s\4\q\m\v\0\z\t\9\x\w\l\q\a\c\2\f\r\v\5\c\v\n\p\9\c\2\z\1\q\z\x\v\j\b\w\3\y\m\6\u\s\s\p\9\d\z\e\n\f\k\n\e\3\s\p\j\6\r\w\3\8\1\m\r\v\2\a\t\o\h\z\h\l\f\w\3\d\6\n\e\a\e\c\5\a\b\e\b\o\w\c\4\r\n\r\1\6\s\c\o\9\v\5\x\o\4\1\i\x\j\q\2\1\5\3\x\5\3\6\k\x\6\l\p\0\k\v\e\b\2\a\x\y\h\u\y\i\f\4\7\h\z\t\y\3\p\v\s\0\7\q\j\g\j\s\1\5\1\1\i\k\e\f\t\t\j\n\d\g\m\n\7\1\b\i\i\9\9\g\b\p\g\s\j\j\t\s\4\y\w\v\2\v\t\g\5\g ]] 00:27:17.235 10:52:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:17.235 10:52:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:17.235 [2024-07-24 10:52:43.832457] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:17.235 [2024-07-24 10:52:43.832728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145550 ] 00:27:17.493 [2024-07-24 10:52:43.979002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:17.493 [2024-07-24 10:52:44.058539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.752  Copying: 512/512 [B] (average 125 kBps) 00:27:17.752 00:27:18.011 10:52:44 -- dd/posix.sh@93 -- # [[ 2lwzj4puf2ogpprb28f2amk4j173ofc19zectwl70frbrv6saiik2w1j9q97uxp18bl2szkgnypqi1p1ap1sxtneu891gpxqbnzdp7oblpfkc02a0vr4h3251ulpv2lgtrko04f6uxefwzgdrtbshfwzu4jey9503i4nvttp7h7mbqj4q9lweealpwpoazs8ymmkudg164lzuv5j831kb1rqoos413ce5tcadh3rj4vis06gj89sf7i7idelu8oa392s8u4spuqrbxd1wc7od97t4mtus9u34zvpqnkz8ges4c38vblez5yhbvs4qmv0zt9xwlqac2frv5cvnp9c2z1qzxvjbw3ym6ussp9dzenfkne3spj6rw381mrv2atohzhlfw3d6neaec5abebowc4rnr16sco9v5xo41ixjq2153x536kx6lp0kveb2axyhuyif47hzty3pvs07qjgjs1511ikefttjndgmn71bii99gbpgsjjts4ywv2vtg5g == \2\l\w\z\j\4\p\u\f\2\o\g\p\p\r\b\2\8\f\2\a\m\k\4\j\1\7\3\o\f\c\1\9\z\e\c\t\w\l\7\0\f\r\b\r\v\6\s\a\i\i\k\2\w\1\j\9\q\9\7\u\x\p\1\8\b\l\2\s\z\k\g\n\y\p\q\i\1\p\1\a\p\1\s\x\t\n\e\u\8\9\1\g\p\x\q\b\n\z\d\p\7\o\b\l\p\f\k\c\0\2\a\0\v\r\4\h\3\2\5\1\u\l\p\v\2\l\g\t\r\k\o\0\4\f\6\u\x\e\f\w\z\g\d\r\t\b\s\h\f\w\z\u\4\j\e\y\9\5\0\3\i\4\n\v\t\t\p\7\h\7\m\b\q\j\4\q\9\l\w\e\e\a\l\p\w\p\o\a\z\s\8\y\m\m\k\u\d\g\1\6\4\l\z\u\v\5\j\8\3\1\k\b\1\r\q\o\o\s\4\1\3\c\e\5\t\c\a\d\h\3\r\j\4\v\i\s\0\6\g\j\8\9\s\f\7\i\7\i\d\e\l\u\8\o\a\3\9\2\s\8\u\4\s\p\u\q\r\b\x\d\1\w\c\7\o\d\9\7\t\4\m\t\u\s\9\u\3\4\z\v\p\q\n\k\z\8\g\e\s\4\c\3\8\v\b\l\e\z\5\y\h\b\v\s\4\q\m\v\0\z\t\9\x\w\l\q\a\c\2\f\r\v\5\c\v\n\p\9\c\2\z\1\q\z\x\v\j\b\w\3\y\m\6\u\s\s\p\9\d\z\e\n\f\k\n\e\3\s\p\j\6\r\w\3\8\1\m\r\v\2\a\t\o\h\z\h\l\f\w\3\d\6\n\e\a\e\c\5\a\b\e\b\o\w\c\4\r\n\r\1\6\s\c\o\9\v\5\x\o\4\1\i\x\j\q\2\1\5\3\x\5\3\6\k\x\6\l\p\0\k\v\e\b\2\a\x\y\h\u\y\i\f\4\7\h\z\t\y\3\p\v\s\0\7\q\j\g\j\s\1\5\1\1\i\k\e\f\t\t\j\n\d\g\m\n\7\1\b\i\i\9\9\g\b\p\g\s\j\j\t\s\4\y\w\v\2\v\t\g\5\g ]] 00:27:18.011 10:52:44 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:18.011 10:52:44 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:18.011 [2024-07-24 10:52:44.491900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:18.011 [2024-07-24 10:52:44.492735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145567 ] 00:27:18.011 [2024-07-24 10:52:44.639652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.270 [2024-07-24 10:52:44.705685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.529  Copying: 512/512 [B] (average 166 kBps) 00:27:18.529 00:27:18.529 10:52:45 -- dd/posix.sh@93 -- # [[ 2lwzj4puf2ogpprb28f2amk4j173ofc19zectwl70frbrv6saiik2w1j9q97uxp18bl2szkgnypqi1p1ap1sxtneu891gpxqbnzdp7oblpfkc02a0vr4h3251ulpv2lgtrko04f6uxefwzgdrtbshfwzu4jey9503i4nvttp7h7mbqj4q9lweealpwpoazs8ymmkudg164lzuv5j831kb1rqoos413ce5tcadh3rj4vis06gj89sf7i7idelu8oa392s8u4spuqrbxd1wc7od97t4mtus9u34zvpqnkz8ges4c38vblez5yhbvs4qmv0zt9xwlqac2frv5cvnp9c2z1qzxvjbw3ym6ussp9dzenfkne3spj6rw381mrv2atohzhlfw3d6neaec5abebowc4rnr16sco9v5xo41ixjq2153x536kx6lp0kveb2axyhuyif47hzty3pvs07qjgjs1511ikefttjndgmn71bii99gbpgsjjts4ywv2vtg5g == \2\l\w\z\j\4\p\u\f\2\o\g\p\p\r\b\2\8\f\2\a\m\k\4\j\1\7\3\o\f\c\1\9\z\e\c\t\w\l\7\0\f\r\b\r\v\6\s\a\i\i\k\2\w\1\j\9\q\9\7\u\x\p\1\8\b\l\2\s\z\k\g\n\y\p\q\i\1\p\1\a\p\1\s\x\t\n\e\u\8\9\1\g\p\x\q\b\n\z\d\p\7\o\b\l\p\f\k\c\0\2\a\0\v\r\4\h\3\2\5\1\u\l\p\v\2\l\g\t\r\k\o\0\4\f\6\u\x\e\f\w\z\g\d\r\t\b\s\h\f\w\z\u\4\j\e\y\9\5\0\3\i\4\n\v\t\t\p\7\h\7\m\b\q\j\4\q\9\l\w\e\e\a\l\p\w\p\o\a\z\s\8\y\m\m\k\u\d\g\1\6\4\l\z\u\v\5\j\8\3\1\k\b\1\r\q\o\o\s\4\1\3\c\e\5\t\c\a\d\h\3\r\j\4\v\i\s\0\6\g\j\8\9\s\f\7\i\7\i\d\e\l\u\8\o\a\3\9\2\s\8\u\4\s\p\u\q\r\b\x\d\1\w\c\7\o\d\9\7\t\4\m\t\u\s\9\u\3\4\z\v\p\q\n\k\z\8\g\e\s\4\c\3\8\v\b\l\e\z\5\y\h\b\v\s\4\q\m\v\0\z\t\9\x\w\l\q\a\c\2\f\r\v\5\c\v\n\p\9\c\2\z\1\q\z\x\v\j\b\w\3\y\m\6\u\s\s\p\9\d\z\e\n\f\k\n\e\3\s\p\j\6\r\w\3\8\1\m\r\v\2\a\t\o\h\z\h\l\f\w\3\d\6\n\e\a\e\c\5\a\b\e\b\o\w\c\4\r\n\r\1\6\s\c\o\9\v\5\x\o\4\1\i\x\j\q\2\1\5\3\x\5\3\6\k\x\6\l\p\0\k\v\e\b\2\a\x\y\h\u\y\i\f\4\7\h\z\t\y\3\p\v\s\0\7\q\j\g\j\s\1\5\1\1\i\k\e\f\t\t\j\n\d\g\m\n\7\1\b\i\i\9\9\g\b\p\g\s\j\j\t\s\4\y\w\v\2\v\t\g\5\g ]] 00:27:18.529 10:52:45 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:18.529 10:52:45 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:18.529 10:52:45 -- dd/common.sh@98 -- # xtrace_disable 00:27:18.529 10:52:45 -- common/autotest_common.sh@10 -- # set +x 00:27:18.529 10:52:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:18.529 10:52:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:18.529 [2024-07-24 10:52:45.167783] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:18.529 [2024-07-24 10:52:45.168359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145583 ] 00:27:18.788 [2024-07-24 10:52:45.320354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.788 [2024-07-24 10:52:45.389710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.356  Copying: 512/512 [B] (average 500 kBps) 00:27:19.356 00:27:19.356 10:52:45 -- dd/posix.sh@93 -- # [[ 900i3l3vkmlh29tl9glfi7gbausm73bojrqwsmwpszgvov72i9fd48zbmkpj7qw4fpawoj0t3vpmg6v7e1amjkdrwksyimzmsg7blpjrmpcxm8k10vf4gonjf2b5neochhqub1e678ymvh4344iul7n72caqghhhbbu9yn9j7nra3aynnyku6nxrstr6hlf1tg1gr5yz8diy1cg1074jbwgdwcf0fk2hj7t6ybp43pggwkhmt1myr058hkb7duh9ti20qb86hpqzu93wmcpx2nl7ubxeah6es018kuzbv8ibqvzb382shbzhrazglupnythbae7ny3ary2uicwptto4xy2ym87erasybdt3etv8z28igf6irrozwictcibls21rdkw0uowq70qobuggawxl39gsl8v2vrbn654bq7mwym5hkwn9aolxnc3bp627gnw4uip704vsosptpy38y9nfrank0is27vp87ss7h7c30ff5t03ez8rn8iji48gdu == \9\0\0\i\3\l\3\v\k\m\l\h\2\9\t\l\9\g\l\f\i\7\g\b\a\u\s\m\7\3\b\o\j\r\q\w\s\m\w\p\s\z\g\v\o\v\7\2\i\9\f\d\4\8\z\b\m\k\p\j\7\q\w\4\f\p\a\w\o\j\0\t\3\v\p\m\g\6\v\7\e\1\a\m\j\k\d\r\w\k\s\y\i\m\z\m\s\g\7\b\l\p\j\r\m\p\c\x\m\8\k\1\0\v\f\4\g\o\n\j\f\2\b\5\n\e\o\c\h\h\q\u\b\1\e\6\7\8\y\m\v\h\4\3\4\4\i\u\l\7\n\7\2\c\a\q\g\h\h\h\b\b\u\9\y\n\9\j\7\n\r\a\3\a\y\n\n\y\k\u\6\n\x\r\s\t\r\6\h\l\f\1\t\g\1\g\r\5\y\z\8\d\i\y\1\c\g\1\0\7\4\j\b\w\g\d\w\c\f\0\f\k\2\h\j\7\t\6\y\b\p\4\3\p\g\g\w\k\h\m\t\1\m\y\r\0\5\8\h\k\b\7\d\u\h\9\t\i\2\0\q\b\8\6\h\p\q\z\u\9\3\w\m\c\p\x\2\n\l\7\u\b\x\e\a\h\6\e\s\0\1\8\k\u\z\b\v\8\i\b\q\v\z\b\3\8\2\s\h\b\z\h\r\a\z\g\l\u\p\n\y\t\h\b\a\e\7\n\y\3\a\r\y\2\u\i\c\w\p\t\t\o\4\x\y\2\y\m\8\7\e\r\a\s\y\b\d\t\3\e\t\v\8\z\2\8\i\g\f\6\i\r\r\o\z\w\i\c\t\c\i\b\l\s\2\1\r\d\k\w\0\u\o\w\q\7\0\q\o\b\u\g\g\a\w\x\l\3\9\g\s\l\8\v\2\v\r\b\n\6\5\4\b\q\7\m\w\y\m\5\h\k\w\n\9\a\o\l\x\n\c\3\b\p\6\2\7\g\n\w\4\u\i\p\7\0\4\v\s\o\s\p\t\p\y\3\8\y\9\n\f\r\a\n\k\0\i\s\2\7\v\p\8\7\s\s\7\h\7\c\3\0\f\f\5\t\0\3\e\z\8\r\n\8\i\j\i\4\8\g\d\u ]] 00:27:19.356 10:52:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:19.356 10:52:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:19.356 [2024-07-24 10:52:45.817010] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:19.356 [2024-07-24 10:52:45.817798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145589 ] 00:27:19.356 [2024-07-24 10:52:45.964628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.356 [2024-07-24 10:52:46.025042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.882  Copying: 512/512 [B] (average 500 kBps) 00:27:19.882 00:27:19.882 10:52:46 -- dd/posix.sh@93 -- # [[ 900i3l3vkmlh29tl9glfi7gbausm73bojrqwsmwpszgvov72i9fd48zbmkpj7qw4fpawoj0t3vpmg6v7e1amjkdrwksyimzmsg7blpjrmpcxm8k10vf4gonjf2b5neochhqub1e678ymvh4344iul7n72caqghhhbbu9yn9j7nra3aynnyku6nxrstr6hlf1tg1gr5yz8diy1cg1074jbwgdwcf0fk2hj7t6ybp43pggwkhmt1myr058hkb7duh9ti20qb86hpqzu93wmcpx2nl7ubxeah6es018kuzbv8ibqvzb382shbzhrazglupnythbae7ny3ary2uicwptto4xy2ym87erasybdt3etv8z28igf6irrozwictcibls21rdkw0uowq70qobuggawxl39gsl8v2vrbn654bq7mwym5hkwn9aolxnc3bp627gnw4uip704vsosptpy38y9nfrank0is27vp87ss7h7c30ff5t03ez8rn8iji48gdu == \9\0\0\i\3\l\3\v\k\m\l\h\2\9\t\l\9\g\l\f\i\7\g\b\a\u\s\m\7\3\b\o\j\r\q\w\s\m\w\p\s\z\g\v\o\v\7\2\i\9\f\d\4\8\z\b\m\k\p\j\7\q\w\4\f\p\a\w\o\j\0\t\3\v\p\m\g\6\v\7\e\1\a\m\j\k\d\r\w\k\s\y\i\m\z\m\s\g\7\b\l\p\j\r\m\p\c\x\m\8\k\1\0\v\f\4\g\o\n\j\f\2\b\5\n\e\o\c\h\h\q\u\b\1\e\6\7\8\y\m\v\h\4\3\4\4\i\u\l\7\n\7\2\c\a\q\g\h\h\h\b\b\u\9\y\n\9\j\7\n\r\a\3\a\y\n\n\y\k\u\6\n\x\r\s\t\r\6\h\l\f\1\t\g\1\g\r\5\y\z\8\d\i\y\1\c\g\1\0\7\4\j\b\w\g\d\w\c\f\0\f\k\2\h\j\7\t\6\y\b\p\4\3\p\g\g\w\k\h\m\t\1\m\y\r\0\5\8\h\k\b\7\d\u\h\9\t\i\2\0\q\b\8\6\h\p\q\z\u\9\3\w\m\c\p\x\2\n\l\7\u\b\x\e\a\h\6\e\s\0\1\8\k\u\z\b\v\8\i\b\q\v\z\b\3\8\2\s\h\b\z\h\r\a\z\g\l\u\p\n\y\t\h\b\a\e\7\n\y\3\a\r\y\2\u\i\c\w\p\t\t\o\4\x\y\2\y\m\8\7\e\r\a\s\y\b\d\t\3\e\t\v\8\z\2\8\i\g\f\6\i\r\r\o\z\w\i\c\t\c\i\b\l\s\2\1\r\d\k\w\0\u\o\w\q\7\0\q\o\b\u\g\g\a\w\x\l\3\9\g\s\l\8\v\2\v\r\b\n\6\5\4\b\q\7\m\w\y\m\5\h\k\w\n\9\a\o\l\x\n\c\3\b\p\6\2\7\g\n\w\4\u\i\p\7\0\4\v\s\o\s\p\t\p\y\3\8\y\9\n\f\r\a\n\k\0\i\s\2\7\v\p\8\7\s\s\7\h\7\c\3\0\f\f\5\t\0\3\e\z\8\r\n\8\i\j\i\4\8\g\d\u ]] 00:27:19.882 10:52:46 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:19.882 10:52:46 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:19.882 [2024-07-24 10:52:46.451092] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:19.882 [2024-07-24 10:52:46.451378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145605 ] 00:27:20.144 [2024-07-24 10:52:46.600661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.144 [2024-07-24 10:52:46.664750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.403  Copying: 512/512 [B] (average 166 kBps) 00:27:20.403 00:27:20.403 10:52:47 -- dd/posix.sh@93 -- # [[ 900i3l3vkmlh29tl9glfi7gbausm73bojrqwsmwpszgvov72i9fd48zbmkpj7qw4fpawoj0t3vpmg6v7e1amjkdrwksyimzmsg7blpjrmpcxm8k10vf4gonjf2b5neochhqub1e678ymvh4344iul7n72caqghhhbbu9yn9j7nra3aynnyku6nxrstr6hlf1tg1gr5yz8diy1cg1074jbwgdwcf0fk2hj7t6ybp43pggwkhmt1myr058hkb7duh9ti20qb86hpqzu93wmcpx2nl7ubxeah6es018kuzbv8ibqvzb382shbzhrazglupnythbae7ny3ary2uicwptto4xy2ym87erasybdt3etv8z28igf6irrozwictcibls21rdkw0uowq70qobuggawxl39gsl8v2vrbn654bq7mwym5hkwn9aolxnc3bp627gnw4uip704vsosptpy38y9nfrank0is27vp87ss7h7c30ff5t03ez8rn8iji48gdu == \9\0\0\i\3\l\3\v\k\m\l\h\2\9\t\l\9\g\l\f\i\7\g\b\a\u\s\m\7\3\b\o\j\r\q\w\s\m\w\p\s\z\g\v\o\v\7\2\i\9\f\d\4\8\z\b\m\k\p\j\7\q\w\4\f\p\a\w\o\j\0\t\3\v\p\m\g\6\v\7\e\1\a\m\j\k\d\r\w\k\s\y\i\m\z\m\s\g\7\b\l\p\j\r\m\p\c\x\m\8\k\1\0\v\f\4\g\o\n\j\f\2\b\5\n\e\o\c\h\h\q\u\b\1\e\6\7\8\y\m\v\h\4\3\4\4\i\u\l\7\n\7\2\c\a\q\g\h\h\h\b\b\u\9\y\n\9\j\7\n\r\a\3\a\y\n\n\y\k\u\6\n\x\r\s\t\r\6\h\l\f\1\t\g\1\g\r\5\y\z\8\d\i\y\1\c\g\1\0\7\4\j\b\w\g\d\w\c\f\0\f\k\2\h\j\7\t\6\y\b\p\4\3\p\g\g\w\k\h\m\t\1\m\y\r\0\5\8\h\k\b\7\d\u\h\9\t\i\2\0\q\b\8\6\h\p\q\z\u\9\3\w\m\c\p\x\2\n\l\7\u\b\x\e\a\h\6\e\s\0\1\8\k\u\z\b\v\8\i\b\q\v\z\b\3\8\2\s\h\b\z\h\r\a\z\g\l\u\p\n\y\t\h\b\a\e\7\n\y\3\a\r\y\2\u\i\c\w\p\t\t\o\4\x\y\2\y\m\8\7\e\r\a\s\y\b\d\t\3\e\t\v\8\z\2\8\i\g\f\6\i\r\r\o\z\w\i\c\t\c\i\b\l\s\2\1\r\d\k\w\0\u\o\w\q\7\0\q\o\b\u\g\g\a\w\x\l\3\9\g\s\l\8\v\2\v\r\b\n\6\5\4\b\q\7\m\w\y\m\5\h\k\w\n\9\a\o\l\x\n\c\3\b\p\6\2\7\g\n\w\4\u\i\p\7\0\4\v\s\o\s\p\t\p\y\3\8\y\9\n\f\r\a\n\k\0\i\s\2\7\v\p\8\7\s\s\7\h\7\c\3\0\f\f\5\t\0\3\e\z\8\r\n\8\i\j\i\4\8\g\d\u ]] 00:27:20.403 10:52:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:20.403 10:52:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:20.662 [2024-07-24 10:52:47.101299] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:20.662 [2024-07-24 10:52:47.101569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145618 ] 00:27:20.662 [2024-07-24 10:52:47.248549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.662 [2024-07-24 10:52:47.311501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.179  Copying: 512/512 [B] (average 250 kBps) 00:27:21.179 00:27:21.179 10:52:47 -- dd/posix.sh@93 -- # [[ 900i3l3vkmlh29tl9glfi7gbausm73bojrqwsmwpszgvov72i9fd48zbmkpj7qw4fpawoj0t3vpmg6v7e1amjkdrwksyimzmsg7blpjrmpcxm8k10vf4gonjf2b5neochhqub1e678ymvh4344iul7n72caqghhhbbu9yn9j7nra3aynnyku6nxrstr6hlf1tg1gr5yz8diy1cg1074jbwgdwcf0fk2hj7t6ybp43pggwkhmt1myr058hkb7duh9ti20qb86hpqzu93wmcpx2nl7ubxeah6es018kuzbv8ibqvzb382shbzhrazglupnythbae7ny3ary2uicwptto4xy2ym87erasybdt3etv8z28igf6irrozwictcibls21rdkw0uowq70qobuggawxl39gsl8v2vrbn654bq7mwym5hkwn9aolxnc3bp627gnw4uip704vsosptpy38y9nfrank0is27vp87ss7h7c30ff5t03ez8rn8iji48gdu == \9\0\0\i\3\l\3\v\k\m\l\h\2\9\t\l\9\g\l\f\i\7\g\b\a\u\s\m\7\3\b\o\j\r\q\w\s\m\w\p\s\z\g\v\o\v\7\2\i\9\f\d\4\8\z\b\m\k\p\j\7\q\w\4\f\p\a\w\o\j\0\t\3\v\p\m\g\6\v\7\e\1\a\m\j\k\d\r\w\k\s\y\i\m\z\m\s\g\7\b\l\p\j\r\m\p\c\x\m\8\k\1\0\v\f\4\g\o\n\j\f\2\b\5\n\e\o\c\h\h\q\u\b\1\e\6\7\8\y\m\v\h\4\3\4\4\i\u\l\7\n\7\2\c\a\q\g\h\h\h\b\b\u\9\y\n\9\j\7\n\r\a\3\a\y\n\n\y\k\u\6\n\x\r\s\t\r\6\h\l\f\1\t\g\1\g\r\5\y\z\8\d\i\y\1\c\g\1\0\7\4\j\b\w\g\d\w\c\f\0\f\k\2\h\j\7\t\6\y\b\p\4\3\p\g\g\w\k\h\m\t\1\m\y\r\0\5\8\h\k\b\7\d\u\h\9\t\i\2\0\q\b\8\6\h\p\q\z\u\9\3\w\m\c\p\x\2\n\l\7\u\b\x\e\a\h\6\e\s\0\1\8\k\u\z\b\v\8\i\b\q\v\z\b\3\8\2\s\h\b\z\h\r\a\z\g\l\u\p\n\y\t\h\b\a\e\7\n\y\3\a\r\y\2\u\i\c\w\p\t\t\o\4\x\y\2\y\m\8\7\e\r\a\s\y\b\d\t\3\e\t\v\8\z\2\8\i\g\f\6\i\r\r\o\z\w\i\c\t\c\i\b\l\s\2\1\r\d\k\w\0\u\o\w\q\7\0\q\o\b\u\g\g\a\w\x\l\3\9\g\s\l\8\v\2\v\r\b\n\6\5\4\b\q\7\m\w\y\m\5\h\k\w\n\9\a\o\l\x\n\c\3\b\p\6\2\7\g\n\w\4\u\i\p\7\0\4\v\s\o\s\p\t\p\y\3\8\y\9\n\f\r\a\n\k\0\i\s\2\7\v\p\8\7\s\s\7\h\7\c\3\0\f\f\5\t\0\3\e\z\8\r\n\8\i\j\i\4\8\g\d\u ]] 00:27:21.179 00:27:21.179 real 0m5.204s 00:27:21.179 user 0m2.587s 00:27:21.179 sys 0m1.508s 00:27:21.179 10:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.179 10:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:21.179 ************************************ 00:27:21.179 END TEST dd_flags_misc_forced_aio 00:27:21.179 ************************************ 00:27:21.179 10:52:47 -- dd/posix.sh@1 -- # cleanup 00:27:21.179 10:52:47 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:21.179 10:52:47 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:21.179 00:27:21.179 real 0m23.515s 00:27:21.179 user 0m10.917s 00:27:21.179 sys 0m6.428s 00:27:21.179 10:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.179 10:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:21.179 ************************************ 00:27:21.179 END TEST spdk_dd_posix 00:27:21.179 ************************************ 00:27:21.179 10:52:47 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:27:21.179 10:52:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:21.179 10:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:21.179 10:52:47 -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.179 ************************************ 00:27:21.179 START TEST spdk_dd_malloc 00:27:21.179 ************************************ 00:27:21.179 10:52:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:27:21.438 * Looking for test storage... 00:27:21.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:21.438 10:52:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:21.438 10:52:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.438 10:52:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.438 10:52:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.438 10:52:47 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:21.438 10:52:47 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:21.438 10:52:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:21.438 10:52:47 -- paths/export.sh@5 -- # export PATH 00:27:21.438 10:52:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:21.438 10:52:47 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:27:21.438 10:52:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:21.438 10:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:21.438 10:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:21.438 ************************************ 00:27:21.438 START TEST dd_malloc_copy 00:27:21.438 ************************************ 00:27:21.438 10:52:47 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:27:21.438 10:52:47 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:27:21.438 10:52:47 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:27:21.438 10:52:47 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:27:21.438 10:52:47 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:27:21.438 10:52:47 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:27:21.439 10:52:47 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:27:21.439 10:52:47 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:27:21.439 10:52:47 -- dd/malloc.sh@28 -- # gen_conf 00:27:21.439 10:52:47 -- dd/common.sh@31 -- # xtrace_disable 00:27:21.439 10:52:47 -- common/autotest_common.sh@10 -- # set +x 00:27:21.439 [2024-07-24 10:52:47.949597] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:21.439 [2024-07-24 10:52:47.949843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145693 ] 00:27:21.439 { 00:27:21.439 "subsystems": [ 00:27:21.439 { 00:27:21.439 "subsystem": "bdev", 00:27:21.439 "config": [ 00:27:21.439 { 00:27:21.439 "params": { 00:27:21.439 "block_size": 512, 00:27:21.439 "num_blocks": 1048576, 00:27:21.439 "name": "malloc0" 00:27:21.439 }, 00:27:21.439 "method": "bdev_malloc_create" 00:27:21.439 }, 00:27:21.439 { 00:27:21.439 "params": { 00:27:21.439 "block_size": 512, 00:27:21.439 "num_blocks": 1048576, 00:27:21.439 "name": "malloc1" 00:27:21.439 }, 00:27:21.439 "method": "bdev_malloc_create" 00:27:21.439 }, 00:27:21.439 { 00:27:21.439 "method": "bdev_wait_for_examine" 00:27:21.439 } 00:27:21.439 ] 00:27:21.439 } 00:27:21.439 ] 00:27:21.439 } 00:27:21.439 [2024-07-24 10:52:48.099100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.697 [2024-07-24 10:52:48.163463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.145  Copying: 194/512 [MB] (194 MBps) Copying: 385/512 [MB] (190 MBps) Copying: 512/512 [MB] (average 194 MBps) 00:27:25.145 00:27:25.403 10:52:51 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:27:25.403 10:52:51 -- dd/malloc.sh@33 -- # gen_conf 00:27:25.403 10:52:51 -- dd/common.sh@31 -- # xtrace_disable 00:27:25.403 10:52:51 -- common/autotest_common.sh@10 -- # set +x 00:27:25.403 [2024-07-24 10:52:51.891035] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:25.403 [2024-07-24 10:52:51.891290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145752 ] 00:27:25.403 { 00:27:25.403 "subsystems": [ 00:27:25.403 { 00:27:25.403 "subsystem": "bdev", 00:27:25.403 "config": [ 00:27:25.403 { 00:27:25.403 "params": { 00:27:25.403 "block_size": 512, 00:27:25.403 "num_blocks": 1048576, 00:27:25.403 "name": "malloc0" 00:27:25.403 }, 00:27:25.403 "method": "bdev_malloc_create" 00:27:25.403 }, 00:27:25.403 { 00:27:25.403 "params": { 00:27:25.403 "block_size": 512, 00:27:25.403 "num_blocks": 1048576, 00:27:25.403 "name": "malloc1" 00:27:25.403 }, 00:27:25.403 "method": "bdev_malloc_create" 00:27:25.403 }, 00:27:25.403 { 00:27:25.403 "method": "bdev_wait_for_examine" 00:27:25.403 } 00:27:25.403 ] 00:27:25.403 } 00:27:25.403 ] 00:27:25.403 } 00:27:25.403 [2024-07-24 10:52:52.039684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.662 [2024-07-24 10:52:52.108907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.150  Copying: 194/512 [MB] (194 MBps) Copying: 387/512 [MB] (192 MBps) Copying: 512/512 [MB] (average 192 MBps) 00:27:29.150 00:27:29.150 00:27:29.150 real 0m7.885s 00:27:29.150 user 0m6.775s 00:27:29.150 sys 0m0.971s 00:27:29.150 10:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.150 10:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:29.150 ************************************ 00:27:29.150 END TEST dd_malloc_copy 00:27:29.150 ************************************ 00:27:29.150 00:27:29.150 real 0m8.026s 00:27:29.150 user 0m6.856s 00:27:29.150 sys 0m1.038s 00:27:29.150 10:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.150 10:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:29.150 ************************************ 00:27:29.150 END TEST spdk_dd_malloc 00:27:29.150 ************************************ 00:27:29.409 10:52:55 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:27:29.409 10:52:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:29.409 10:52:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.409 10:52:55 -- common/autotest_common.sh@10 -- # set +x 00:27:29.409 ************************************ 00:27:29.409 START TEST spdk_dd_bdev_to_bdev 00:27:29.409 ************************************ 00:27:29.409 10:52:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:27:29.409 * Looking for test storage... 
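The malloc-to-malloc copy above hands spdk_dd its bdev configuration over /dev/fd/62; written out as an ordinary file, the same setup looks roughly like the sketch below. Bdev names, block size and block count are copied from the trace (1048576 blocks x 512 B = 512 MiB per bdev); using a temporary JSON file instead of the fd-62 plumbing is the only substitution.

#!/usr/bin/env bash
# Sketch: create two 512 MiB malloc bdevs and copy malloc0 into malloc1,
# matching the first dd_malloc_copy pass in this log.
cat > /tmp/malloc_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json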
00:27:29.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:29.409 10:52:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:29.409 10:52:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.409 10:52:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.409 10:52:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.409 10:52:55 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.409 10:52:55 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.409 10:52:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.409 10:52:55 -- paths/export.sh@5 -- # export PATH 00:27:29.409 10:52:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:27:29.409 10:52:55 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:27:29.409 10:52:55 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:27:29.409 [2024-07-24 10:52:56.000981] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:29.409 [2024-07-24 10:52:56.001192] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145862 ] 00:27:29.668 [2024-07-24 10:52:56.142731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.668 [2024-07-24 10:52:56.216260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.185  Copying: 256/256 [MB] (average 1213 MBps) 00:27:30.185 00:27:30.185 10:52:56 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:30.185 10:52:56 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:30.185 10:52:56 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:27:30.185 10:52:56 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:27:30.185 10:52:56 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:30.185 10:52:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:30.185 10:52:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:30.185 10:52:56 -- common/autotest_common.sh@10 -- # set +x 00:27:30.185 ************************************ 00:27:30.185 START TEST dd_inflate_file 00:27:30.185 ************************************ 00:27:30.185 10:52:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:27:30.444 [2024-07-24 10:52:56.878464] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:30.444 [2024-07-24 10:52:56.878737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145885 ] 00:27:30.444 [2024-07-24 10:52:57.024725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.444 [2024-07-24 10:52:57.088261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.961  Copying: 64/64 [MB] (average 1207 MBps) 00:27:30.961 00:27:30.961 00:27:30.961 real 0m0.686s 00:27:30.961 user 0m0.318s 00:27:30.961 sys 0m0.239s 00:27:30.961 10:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:30.961 10:52:57 -- common/autotest_common.sh@10 -- # set +x 00:27:30.961 ************************************ 00:27:30.961 END TEST dd_inflate_file 00:27:30.961 ************************************ 00:27:30.961 10:52:57 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:27:30.961 10:52:57 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:27:30.961 10:52:57 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:30.961 10:52:57 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:27:30.961 10:52:57 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:27:30.961 10:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:30.961 10:52:57 -- dd/common.sh@31 -- # xtrace_disable 00:27:30.962 10:52:57 -- common/autotest_common.sh@10 -- # set +x 00:27:30.962 10:52:57 -- common/autotest_common.sh@10 -- # set +x 00:27:30.962 ************************************ 00:27:30.962 START TEST dd_copy_to_out_bdev 00:27:30.962 ************************************ 00:27:30.962 10:52:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:27:30.962 { 00:27:30.962 "subsystems": [ 00:27:30.962 { 00:27:30.962 "subsystem": "bdev", 00:27:30.962 "config": [ 00:27:30.962 { 00:27:30.962 "params": { 00:27:30.962 "block_size": 4096, 00:27:30.962 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:30.962 "name": "aio1" 00:27:30.962 }, 00:27:30.962 "method": "bdev_aio_create" 00:27:30.962 }, 00:27:30.962 { 00:27:30.962 "params": { 00:27:30.962 "trtype": "pcie", 00:27:30.962 "traddr": "0000:00:06.0", 00:27:30.962 "name": "Nvme0" 00:27:30.962 }, 00:27:30.962 "method": "bdev_nvme_attach_controller" 00:27:30.962 }, 00:27:30.962 { 00:27:30.962 "method": "bdev_wait_for_examine" 00:27:30.962 } 00:27:30.962 ] 00:27:30.962 } 00:27:30.962 ] 00:27:30.962 } 00:27:30.962 [2024-07-24 10:52:57.621967] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:30.962 [2024-07-24 10:52:57.622721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145933 ] 00:27:31.221 [2024-07-24 10:52:57.772107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.221 [2024-07-24 10:52:57.853297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.453  Copying: 43/64 [MB] (43 MBps) Copying: 64/64 [MB] (average 43 MBps) 00:27:33.453 00:27:33.453 00:27:33.454 real 0m2.276s 00:27:33.454 user 0m1.935s 00:27:33.454 sys 0m0.239s 00:27:33.454 10:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.454 ************************************ 00:27:33.454 END TEST dd_copy_to_out_bdev 00:27:33.454 ************************************ 00:27:33.454 10:52:59 -- common/autotest_common.sh@10 -- # set +x 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:27:33.454 10:52:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:33.454 10:52:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.454 10:52:59 -- common/autotest_common.sh@10 -- # set +x 00:27:33.454 ************************************ 00:27:33.454 START TEST dd_offset_magic 00:27:33.454 ************************************ 00:27:33.454 10:52:59 -- common/autotest_common.sh@1104 -- # offset_magic 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:27:33.454 10:52:59 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:33.454 10:52:59 -- dd/common.sh@31 -- # xtrace_disable 00:27:33.454 10:52:59 -- common/autotest_common.sh@10 -- # set +x 00:27:33.454 [2024-07-24 10:52:59.951997] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:33.454 [2024-07-24 10:52:59.952284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145978 ] 00:27:33.454 { 00:27:33.454 "subsystems": [ 00:27:33.454 { 00:27:33.454 "subsystem": "bdev", 00:27:33.454 "config": [ 00:27:33.454 { 00:27:33.454 "params": { 00:27:33.454 "block_size": 4096, 00:27:33.454 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:33.454 "name": "aio1" 00:27:33.454 }, 00:27:33.454 "method": "bdev_aio_create" 00:27:33.454 }, 00:27:33.454 { 00:27:33.454 "params": { 00:27:33.454 "trtype": "pcie", 00:27:33.454 "traddr": "0000:00:06.0", 00:27:33.454 "name": "Nvme0" 00:27:33.454 }, 00:27:33.454 "method": "bdev_nvme_attach_controller" 00:27:33.454 }, 00:27:33.454 { 00:27:33.454 "method": "bdev_wait_for_examine" 00:27:33.454 } 00:27:33.454 ] 00:27:33.454 } 00:27:33.454 ] 00:27:33.454 } 00:27:33.454 [2024-07-24 10:53:00.100389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.713 [2024-07-24 10:53:00.175437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.538  Copying: 65/65 [MB] (average 134 MBps) 00:27:34.538 00:27:34.538 10:53:01 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:34.538 10:53:01 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:27:34.538 10:53:01 -- dd/common.sh@31 -- # xtrace_disable 00:27:34.538 10:53:01 -- common/autotest_common.sh@10 -- # set +x 00:27:34.538 [2024-07-24 10:53:01.214630] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:34.538 [2024-07-24 10:53:01.214906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146006 ] 00:27:34.538 { 00:27:34.538 "subsystems": [ 00:27:34.538 { 00:27:34.538 "subsystem": "bdev", 00:27:34.538 "config": [ 00:27:34.538 { 00:27:34.538 "params": { 00:27:34.538 "block_size": 4096, 00:27:34.538 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:34.538 "name": "aio1" 00:27:34.538 }, 00:27:34.538 "method": "bdev_aio_create" 00:27:34.538 }, 00:27:34.538 { 00:27:34.538 "params": { 00:27:34.538 "trtype": "pcie", 00:27:34.538 "traddr": "0000:00:06.0", 00:27:34.538 "name": "Nvme0" 00:27:34.538 }, 00:27:34.538 "method": "bdev_nvme_attach_controller" 00:27:34.538 }, 00:27:34.538 { 00:27:34.538 "method": "bdev_wait_for_examine" 00:27:34.538 } 00:27:34.538 ] 00:27:34.538 } 00:27:34.538 ] 00:27:34.538 } 00:27:34.798 [2024-07-24 10:53:01.364037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.798 [2024-07-24 10:53:01.444783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.315  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:35.315 00:27:35.315 10:53:01 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:35.315 10:53:01 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:35.315 10:53:01 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:27:35.315 10:53:01 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:27:35.315 10:53:01 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:27:35.315 10:53:01 -- dd/common.sh@31 -- # xtrace_disable 00:27:35.315 10:53:01 -- common/autotest_common.sh@10 -- # set +x 00:27:35.574 [2024-07-24 10:53:02.044604] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:35.574 [2024-07-24 10:53:02.044817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146028 ] 00:27:35.574 { 00:27:35.574 "subsystems": [ 00:27:35.574 { 00:27:35.574 "subsystem": "bdev", 00:27:35.574 "config": [ 00:27:35.574 { 00:27:35.574 "params": { 00:27:35.574 "block_size": 4096, 00:27:35.574 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:35.574 "name": "aio1" 00:27:35.574 }, 00:27:35.574 "method": "bdev_aio_create" 00:27:35.574 }, 00:27:35.574 { 00:27:35.574 "params": { 00:27:35.574 "trtype": "pcie", 00:27:35.574 "traddr": "0000:00:06.0", 00:27:35.574 "name": "Nvme0" 00:27:35.574 }, 00:27:35.574 "method": "bdev_nvme_attach_controller" 00:27:35.574 }, 00:27:35.574 { 00:27:35.574 "method": "bdev_wait_for_examine" 00:27:35.574 } 00:27:35.574 ] 00:27:35.574 } 00:27:35.574 ] 00:27:35.574 } 00:27:35.574 [2024-07-24 10:53:02.193790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.832 [2024-07-24 10:53:02.272065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.660  Copying: 65/65 [MB] (average 186 MBps) 00:27:36.660 00:27:36.660 10:53:03 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:27:36.660 10:53:03 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:27:36.660 10:53:03 -- dd/common.sh@31 -- # xtrace_disable 00:27:36.660 10:53:03 -- common/autotest_common.sh@10 -- # set +x 00:27:36.660 [2024-07-24 10:53:03.278044] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:36.660 [2024-07-24 10:53:03.278268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146050 ] 00:27:36.660 { 00:27:36.660 "subsystems": [ 00:27:36.660 { 00:27:36.660 "subsystem": "bdev", 00:27:36.660 "config": [ 00:27:36.660 { 00:27:36.660 "params": { 00:27:36.660 "block_size": 4096, 00:27:36.660 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:36.660 "name": "aio1" 00:27:36.660 }, 00:27:36.660 "method": "bdev_aio_create" 00:27:36.660 }, 00:27:36.660 { 00:27:36.660 "params": { 00:27:36.660 "trtype": "pcie", 00:27:36.660 "traddr": "0000:00:06.0", 00:27:36.660 "name": "Nvme0" 00:27:36.660 }, 00:27:36.660 "method": "bdev_nvme_attach_controller" 00:27:36.660 }, 00:27:36.660 { 00:27:36.660 "method": "bdev_wait_for_examine" 00:27:36.660 } 00:27:36.660 ] 00:27:36.660 } 00:27:36.660 ] 00:27:36.660 } 00:27:36.918 [2024-07-24 10:53:03.426128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.918 [2024-07-24 10:53:03.548805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.744  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:37.744 00:27:37.744 10:53:04 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:27:37.744 10:53:04 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:27:37.744 00:27:37.744 real 0m4.238s 00:27:37.744 user 0m2.113s 00:27:37.744 sys 0m1.007s 00:27:37.744 10:53:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:37.744 10:53:04 -- common/autotest_common.sh@10 -- # set +x 00:27:37.744 ************************************ 00:27:37.744 END TEST dd_offset_magic 00:27:37.744 ************************************ 00:27:37.744 10:53:04 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:27:37.744 10:53:04 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:27:37.744 10:53:04 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:37.744 10:53:04 -- dd/common.sh@11 -- # local nvme_ref= 00:27:37.744 10:53:04 -- dd/common.sh@12 -- # local size=4194330 00:27:37.744 10:53:04 -- dd/common.sh@14 -- # local bs=1048576 00:27:37.744 10:53:04 -- dd/common.sh@15 -- # local count=5 00:27:37.744 10:53:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:27:37.744 10:53:04 -- dd/common.sh@18 -- # gen_conf 00:27:37.744 10:53:04 -- dd/common.sh@31 -- # xtrace_disable 00:27:37.744 10:53:04 -- common/autotest_common.sh@10 -- # set +x 00:27:37.744 [2024-07-24 10:53:04.230260] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:37.744 [2024-07-24 10:53:04.230462] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146086 ] 00:27:37.744 { 00:27:37.744 "subsystems": [ 00:27:37.744 { 00:27:37.744 "subsystem": "bdev", 00:27:37.744 "config": [ 00:27:37.744 { 00:27:37.744 "params": { 00:27:37.744 "block_size": 4096, 00:27:37.744 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:37.744 "name": "aio1" 00:27:37.744 }, 00:27:37.744 "method": "bdev_aio_create" 00:27:37.744 }, 00:27:37.744 { 00:27:37.744 "params": { 00:27:37.744 "trtype": "pcie", 00:27:37.744 "traddr": "0000:00:06.0", 00:27:37.744 "name": "Nvme0" 00:27:37.744 }, 00:27:37.744 "method": "bdev_nvme_attach_controller" 00:27:37.744 }, 00:27:37.744 { 00:27:37.744 "method": "bdev_wait_for_examine" 00:27:37.744 } 00:27:37.744 ] 00:27:37.744 } 00:27:37.744 ] 00:27:37.744 } 00:27:37.745 [2024-07-24 10:53:04.379981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.004 [2024-07-24 10:53:04.454988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.262  Copying: 5120/5120 [kB] (average 1000 MBps) 00:27:38.262 00:27:38.262 10:53:04 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:27:38.262 10:53:04 -- dd/common.sh@10 -- # local bdev=aio1 00:27:38.262 10:53:04 -- dd/common.sh@11 -- # local nvme_ref= 00:27:38.262 10:53:04 -- dd/common.sh@12 -- # local size=4194330 00:27:38.262 10:53:04 -- dd/common.sh@14 -- # local bs=1048576 00:27:38.262 10:53:04 -- dd/common.sh@15 -- # local count=5 00:27:38.262 10:53:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:27:38.262 10:53:04 -- dd/common.sh@18 -- # gen_conf 00:27:38.262 10:53:04 -- dd/common.sh@31 -- # xtrace_disable 00:27:38.262 10:53:04 -- common/autotest_common.sh@10 -- # set +x 00:27:38.521 [2024-07-24 10:53:04.991083] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:38.521 [2024-07-24 10:53:04.992074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146104 ] 00:27:38.521 { 00:27:38.521 "subsystems": [ 00:27:38.521 { 00:27:38.521 "subsystem": "bdev", 00:27:38.521 "config": [ 00:27:38.521 { 00:27:38.521 "params": { 00:27:38.521 "block_size": 4096, 00:27:38.521 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:38.522 "name": "aio1" 00:27:38.522 }, 00:27:38.522 "method": "bdev_aio_create" 00:27:38.522 }, 00:27:38.522 { 00:27:38.522 "params": { 00:27:38.522 "trtype": "pcie", 00:27:38.522 "traddr": "0000:00:06.0", 00:27:38.522 "name": "Nvme0" 00:27:38.522 }, 00:27:38.522 "method": "bdev_nvme_attach_controller" 00:27:38.522 }, 00:27:38.522 { 00:27:38.522 "method": "bdev_wait_for_examine" 00:27:38.522 } 00:27:38.522 ] 00:27:38.522 } 00:27:38.522 ] 00:27:38.522 } 00:27:38.522 [2024-07-24 10:53:05.139821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.781 [2024-07-24 10:53:05.214788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.348  Copying: 5120/5120 [kB] (average 238 MBps) 00:27:39.348 00:27:39.348 10:53:05 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:39.348 ************************************ 00:27:39.348 END TEST spdk_dd_bdev_to_bdev 00:27:39.348 ************************************ 00:27:39.348 00:27:39.348 real 0m9.969s 00:27:39.348 user 0m5.775s 00:27:39.348 sys 0m2.438s 00:27:39.348 10:53:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.348 10:53:05 -- common/autotest_common.sh@10 -- # set +x 00:27:39.348 10:53:05 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:27:39.348 10:53:05 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:39.348 10:53:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:39.348 10:53:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.348 10:53:05 -- common/autotest_common.sh@10 -- # set +x 00:27:39.348 ************************************ 00:27:39.348 START TEST spdk_dd_sparse 00:27:39.348 ************************************ 00:27:39.348 10:53:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:39.348 * Looking for test storage... 
00:27:39.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:39.348 10:53:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:39.348 10:53:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.348 10:53:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.348 10:53:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.348 10:53:05 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.348 10:53:05 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.348 10:53:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.348 10:53:05 -- paths/export.sh@5 -- # export PATH 00:27:39.348 10:53:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:39.348 10:53:05 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:27:39.348 10:53:05 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:27:39.348 10:53:05 -- dd/sparse.sh@110 -- # file1=file_zero1 00:27:39.348 10:53:05 -- dd/sparse.sh@111 -- # file2=file_zero2 00:27:39.348 10:53:05 -- dd/sparse.sh@112 -- # file3=file_zero3 00:27:39.348 10:53:05 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:27:39.348 10:53:05 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:27:39.348 10:53:05 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:27:39.348 10:53:05 -- dd/sparse.sh@118 -- # prepare 00:27:39.348 10:53:05 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:27:39.348 10:53:05 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:27:39.348 1+0 records in 00:27:39.348 1+0 records 
out 00:27:39.348 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00924751 s, 454 MB/s 00:27:39.348 10:53:05 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:27:39.348 1+0 records in 00:27:39.348 1+0 records out 00:27:39.348 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00835106 s, 502 MB/s 00:27:39.348 10:53:06 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:27:39.348 1+0 records in 00:27:39.348 1+0 records out 00:27:39.348 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0109139 s, 384 MB/s 00:27:39.348 10:53:06 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:27:39.348 10:53:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:39.348 10:53:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:39.348 10:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:39.348 ************************************ 00:27:39.348 START TEST dd_sparse_file_to_file 00:27:39.348 ************************************ 00:27:39.348 10:53:06 -- common/autotest_common.sh@1104 -- # file_to_file 00:27:39.348 10:53:06 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:27:39.348 10:53:06 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:27:39.348 10:53:06 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:39.607 10:53:06 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:27:39.607 10:53:06 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:27:39.607 10:53:06 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:27:39.607 10:53:06 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:27:39.607 10:53:06 -- dd/sparse.sh@41 -- # gen_conf 00:27:39.607 10:53:06 -- dd/common.sh@31 -- # xtrace_disable 00:27:39.607 10:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:39.607 [2024-07-24 10:53:06.079235] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:39.607 [2024-07-24 10:53:06.079488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146178 ] 00:27:39.607 { 00:27:39.607 "subsystems": [ 00:27:39.607 { 00:27:39.607 "subsystem": "bdev", 00:27:39.607 "config": [ 00:27:39.607 { 00:27:39.607 "params": { 00:27:39.607 "block_size": 4096, 00:27:39.607 "filename": "dd_sparse_aio_disk", 00:27:39.607 "name": "dd_aio" 00:27:39.607 }, 00:27:39.607 "method": "bdev_aio_create" 00:27:39.607 }, 00:27:39.607 { 00:27:39.607 "params": { 00:27:39.607 "lvs_name": "dd_lvstore", 00:27:39.607 "bdev_name": "dd_aio" 00:27:39.607 }, 00:27:39.607 "method": "bdev_lvol_create_lvstore" 00:27:39.607 }, 00:27:39.607 { 00:27:39.607 "method": "bdev_wait_for_examine" 00:27:39.607 } 00:27:39.607 ] 00:27:39.607 } 00:27:39.607 ] 00:27:39.607 } 00:27:39.607 [2024-07-24 10:53:06.219412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.866 [2024-07-24 10:53:06.299035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.125  Copying: 12/36 [MB] (average 923 MBps) 00:27:40.125 00:27:40.384 10:53:06 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:27:40.384 10:53:06 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:27:40.384 10:53:06 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:27:40.384 10:53:06 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:27:40.384 10:53:06 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:40.384 10:53:06 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:27:40.384 10:53:06 -- dd/sparse.sh@52 -- # stat1_b=24576 00:27:40.384 10:53:06 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:27:40.384 10:53:06 -- dd/sparse.sh@53 -- # stat2_b=24576 00:27:40.384 10:53:06 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:40.384 00:27:40.384 real 0m0.810s 00:27:40.384 user 0m0.441s 00:27:40.385 sys 0m0.215s 00:27:40.385 10:53:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.385 ************************************ 00:27:40.385 END TEST dd_sparse_file_to_file 00:27:40.385 ************************************ 00:27:40.385 10:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:40.385 10:53:06 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:27:40.385 10:53:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:40.385 10:53:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:40.385 10:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:40.385 ************************************ 00:27:40.385 START TEST dd_sparse_file_to_bdev 00:27:40.385 ************************************ 00:27:40.385 10:53:06 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:27:40.385 10:53:06 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:40.385 10:53:06 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:27:40.385 10:53:06 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:27:40.385 10:53:06 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:27:40.385 10:53:06 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:27:40.385 10:53:06 -- dd/sparse.sh@73 -- # gen_conf 
00:27:40.385 10:53:06 -- dd/common.sh@31 -- # xtrace_disable 00:27:40.385 10:53:06 -- common/autotest_common.sh@10 -- # set +x 00:27:40.385 [2024-07-24 10:53:06.949317] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:40.385 [2024-07-24 10:53:06.949588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146231 ] 00:27:40.385 { 00:27:40.385 "subsystems": [ 00:27:40.385 { 00:27:40.385 "subsystem": "bdev", 00:27:40.385 "config": [ 00:27:40.385 { 00:27:40.385 "params": { 00:27:40.385 "block_size": 4096, 00:27:40.385 "filename": "dd_sparse_aio_disk", 00:27:40.385 "name": "dd_aio" 00:27:40.385 }, 00:27:40.385 "method": "bdev_aio_create" 00:27:40.385 }, 00:27:40.385 { 00:27:40.385 "params": { 00:27:40.385 "lvs_name": "dd_lvstore", 00:27:40.385 "lvol_name": "dd_lvol", 00:27:40.385 "size": 37748736, 00:27:40.385 "thin_provision": true 00:27:40.385 }, 00:27:40.385 "method": "bdev_lvol_create" 00:27:40.385 }, 00:27:40.385 { 00:27:40.385 "method": "bdev_wait_for_examine" 00:27:40.385 } 00:27:40.385 ] 00:27:40.385 } 00:27:40.385 ] 00:27:40.385 } 00:27:40.643 [2024-07-24 10:53:07.097846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.643 [2024-07-24 10:53:07.176629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.643 [2024-07-24 10:53:07.274697] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:27:40.644  Copying: 12/36 [MB] (average 545 MBps)[2024-07-24 10:53:07.316389] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:27:41.211 00:27:41.211 00:27:41.211 00:27:41.211 real 0m0.780s 00:27:41.211 user 0m0.410s 00:27:41.211 sys 0m0.246s 00:27:41.211 10:53:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.211 10:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:41.211 ************************************ 00:27:41.211 END TEST dd_sparse_file_to_bdev 00:27:41.211 ************************************ 00:27:41.211 10:53:07 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:27:41.211 10:53:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:41.211 10:53:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.211 10:53:07 -- common/autotest_common.sh@10 -- # set +x 00:27:41.211 ************************************ 00:27:41.212 START TEST dd_sparse_bdev_to_file 00:27:41.212 ************************************ 00:27:41.212 10:53:07 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:27:41.212 10:53:07 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:27:41.212 10:53:07 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:27:41.212 10:53:07 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:41.212 10:53:07 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:27:41.212 10:53:07 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:27:41.212 10:53:07 -- dd/sparse.sh@91 -- # gen_conf 00:27:41.212 10:53:07 -- dd/common.sh@31 -- # xtrace_disable 00:27:41.212 10:53:07 -- 
common/autotest_common.sh@10 -- # set +x 00:27:41.212 [2024-07-24 10:53:07.786639] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:41.212 [2024-07-24 10:53:07.787005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146269 ] 00:27:41.212 { 00:27:41.212 "subsystems": [ 00:27:41.212 { 00:27:41.212 "subsystem": "bdev", 00:27:41.212 "config": [ 00:27:41.212 { 00:27:41.212 "params": { 00:27:41.212 "block_size": 4096, 00:27:41.212 "filename": "dd_sparse_aio_disk", 00:27:41.212 "name": "dd_aio" 00:27:41.212 }, 00:27:41.212 "method": "bdev_aio_create" 00:27:41.212 }, 00:27:41.212 { 00:27:41.212 "method": "bdev_wait_for_examine" 00:27:41.212 } 00:27:41.212 ] 00:27:41.212 } 00:27:41.212 ] 00:27:41.212 } 00:27:41.471 [2024-07-24 10:53:07.925275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.471 [2024-07-24 10:53:08.000891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.040  Copying: 12/36 [MB] (average 1000 MBps) 00:27:42.040 00:27:42.040 10:53:08 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:42.040 10:53:08 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:42.040 10:53:08 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:42.040 10:53:08 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:42.040 10:53:08 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:42.040 10:53:08 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:42.040 10:53:08 -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:42.040 10:53:08 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:42.040 10:53:08 -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:42.040 10:53:08 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:42.040 00:27:42.040 real 0m0.760s 00:27:42.040 user 0m0.432s 00:27:42.040 sys 0m0.207s 00:27:42.040 10:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.040 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.040 ************************************ 00:27:42.040 END TEST dd_sparse_bdev_to_file 00:27:42.040 ************************************ 00:27:42.040 10:53:08 -- dd/sparse.sh@1 -- # cleanup 00:27:42.040 10:53:08 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:42.040 10:53:08 -- dd/sparse.sh@12 -- # rm file_zero1 00:27:42.040 10:53:08 -- dd/sparse.sh@13 -- # rm file_zero2 00:27:42.040 10:53:08 -- dd/sparse.sh@14 -- # rm file_zero3 00:27:42.040 ************************************ 00:27:42.040 END TEST spdk_dd_sparse 00:27:42.040 ************************************ 00:27:42.040 00:27:42.040 real 0m2.659s 00:27:42.040 user 0m1.405s 00:27:42.040 sys 0m0.851s 00:27:42.040 10:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.040 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.040 10:53:08 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:42.040 10:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.040 10:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.040 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.040 ************************************ 00:27:42.040 START TEST spdk_dd_negative 00:27:42.040 ************************************ 00:27:42.040 10:53:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 
00:27:42.040 * Looking for test storage... 00:27:42.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:42.040 10:53:08 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:42.040 10:53:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.040 10:53:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.040 10:53:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.040 10:53:08 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:42.040 10:53:08 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:42.040 10:53:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:42.040 10:53:08 -- paths/export.sh@5 -- # export PATH 00:27:42.040 10:53:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:42.040 10:53:08 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:42.040 10:53:08 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:42.040 10:53:08 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:42.040 10:53:08 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:42.040 10:53:08 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:27:42.040 10:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.040 10:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.040 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.040 
************************************ 00:27:42.040 START TEST dd_invalid_arguments 00:27:42.040 ************************************ 00:27:42.040 10:53:08 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:27:42.040 10:53:08 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:42.040 10:53:08 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.040 10:53:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:42.040 10:53:08 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.040 10:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.040 10:53:08 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.040 10:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.040 10:53:08 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.040 10:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.040 10:53:08 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.041 10:53:08 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:42.041 10:53:08 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:42.300 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:42.300 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:42.300 options: 00:27:42.300 -c, --config JSON config file (default none) 00:27:42.300 --json JSON config file (default none) 00:27:42.300 --json-ignore-init-errors 00:27:42.300 don't exit on invalid config entry 00:27:42.300 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:42.300 -g, --single-file-segments 00:27:42.300 force creating just one hugetlbfs file 00:27:42.300 -h, --help show this usage 00:27:42.300 -i, --shm-id shared memory ID (optional) 00:27:42.300 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:27:42.300 --lcores lcore to CPU mapping list. The list is in the format: 00:27:42.300 [<,lcores[@CPUs]>...] 00:27:42.300 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:42.300 Within the group, '-' is used for range separator, 00:27:42.300 ',' is used for single number separator. 00:27:42.300 '( )' can be omitted for single element group, 00:27:42.300 '@' can be omitted if cpus and lcores have the same value 00:27:42.300 -n, --mem-channels channel number of memory channels used for DPDK 00:27:42.300 -p, --main-core main (primary) core for DPDK 00:27:42.300 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:42.300 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:42.300 --disable-cpumask-locks Disable CPU core lock files. 
00:27:42.300 --silence-noticelog disable notice level logging to stderr 00:27:42.300 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:42.300 -u, --no-pci disable PCI access 00:27:42.300 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:42.300 --max-delay maximum reactor delay (in microseconds) 00:27:42.300 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:42.300 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:42.300 -R, --huge-unlink unlink huge files after initialization 00:27:42.300 -v, --version print SPDK version 00:27:42.300 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:42.300 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:42.300 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:42.300 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:27:42.300 Tracepoints vary in size and can use more than one trace entry. 00:27:42.300 --rpcs-allowed comma-separated list of permitted RPCS 00:27:42.300 --env-context Opaque context for use of the env implementation 00:27:42.300 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:42.300 --no-huge run without using hugepages 00:27:42.300 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:27:42.300 -e, --tpoint-group [:] 00:27:42.300 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:27:42.300 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:27:42.301 Groups and [2024-07-24 10:53:08.764667] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:27:42.301 masks can be combined (e.g. thread,bdev:0x1). 00:27:42.301 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:27:42.301 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:27:42.301 [--------- DD Options ---------] 00:27:42.301 --if Input file. Must specify either --if or --ib. 00:27:42.301 --ib Input bdev. Must specifier either --if or --ib 00:27:42.301 --of Output file. Must specify either --of or --ob. 00:27:42.301 --ob Output bdev. Must specify either --of or --ob. 00:27:42.301 --iflag Input file flags. 00:27:42.301 --oflag Output file flags. 00:27:42.301 --bs I/O unit size (default: 4096) 00:27:42.301 --qd Queue depth (default: 2) 00:27:42.301 --count I/O unit count. The number of I/O units to copy. (default: all) 00:27:42.301 --skip Skip this many I/O units at start of input. 
(default: 0) 00:27:42.301 --seek Skip this many I/O units at start of output. (default: 0) 00:27:42.301 --aio Force usage of AIO. (by default io_uring is used if available) 00:27:42.301 --sparse Enable hole skipping in input target 00:27:42.301 Available iflag and oflag values: 00:27:42.301 append - append mode 00:27:42.301 direct - use direct I/O for data 00:27:42.301 directory - fail unless a directory 00:27:42.301 dsync - use synchronized I/O for data 00:27:42.301 noatime - do not update access time 00:27:42.301 noctty - do not assign controlling terminal from file 00:27:42.301 nofollow - do not follow symlinks 00:27:42.301 nonblock - use non-blocking I/O 00:27:42.301 sync - use synchronized I/O for data and metadata 00:27:42.301 10:53:08 -- common/autotest_common.sh@643 -- # es=2 00:27:42.301 10:53:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.301 10:53:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.301 10:53:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.301 00:27:42.301 real 0m0.097s 00:27:42.301 user 0m0.047s 00:27:42.301 sys 0m0.050s 00:27:42.301 10:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.301 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.301 ************************************ 00:27:42.301 END TEST dd_invalid_arguments 00:27:42.301 ************************************ 00:27:42.301 10:53:08 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:42.301 10:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.301 10:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.301 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.301 ************************************ 00:27:42.301 START TEST dd_double_input 00:27:42.301 ************************************ 00:27:42.301 10:53:08 -- common/autotest_common.sh@1104 -- # double_input 00:27:42.301 10:53:08 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:42.301 10:53:08 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.301 10:53:08 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:42.301 10:53:08 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.301 10:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.301 10:53:08 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.301 10:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.301 10:53:08 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.301 10:53:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.301 10:53:08 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.301 10:53:08 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:42.301 10:53:08 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:42.301 [2024-07-24 10:53:08.910216] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:27:42.301 10:53:08 -- common/autotest_common.sh@643 -- # es=22 00:27:42.301 10:53:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.301 10:53:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.301 10:53:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.301 00:27:42.301 real 0m0.094s 00:27:42.301 user 0m0.048s 00:27:42.301 sys 0m0.047s 00:27:42.301 10:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.301 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.301 ************************************ 00:27:42.301 END TEST dd_double_input 00:27:42.301 ************************************ 00:27:42.560 10:53:08 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:42.560 10:53:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.560 10:53:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.560 10:53:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.560 ************************************ 00:27:42.560 START TEST dd_double_output 00:27:42.560 ************************************ 00:27:42.560 10:53:09 -- common/autotest_common.sh@1104 -- # double_output 00:27:42.560 10:53:09 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:42.560 10:53:09 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.560 10:53:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:42.560 10:53:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.560 10:53:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.560 10:53:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:42.560 10:53:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:42.560 [2024-07-24 10:53:09.058934] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:27:42.560 10:53:09 -- common/autotest_common.sh@643 -- # es=22 00:27:42.560 10:53:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.560 10:53:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.560 10:53:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.560 00:27:42.560 real 0m0.096s 00:27:42.560 user 0m0.038s 00:27:42.560 sys 0m0.058s 00:27:42.560 10:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.560 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:42.560 ************************************ 00:27:42.560 END TEST dd_double_output 00:27:42.560 ************************************ 00:27:42.560 10:53:09 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:42.560 10:53:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.560 10:53:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.560 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:42.560 ************************************ 00:27:42.560 START TEST dd_no_input 00:27:42.560 ************************************ 00:27:42.560 10:53:09 -- common/autotest_common.sh@1104 -- # no_input 00:27:42.560 10:53:09 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:42.560 10:53:09 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.560 10:53:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:42.560 10:53:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.560 10:53:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.560 10:53:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.560 10:53:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:42.560 10:53:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:42.560 [2024-07-24 10:53:09.208681] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:27:42.820 10:53:09 -- common/autotest_common.sh@643 -- # es=22 00:27:42.820 10:53:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.820 10:53:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.820 10:53:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.820 00:27:42.820 real 0m0.098s 00:27:42.820 user 0m0.058s 00:27:42.820 sys 0m0.040s 00:27:42.820 10:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.820 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:42.820 ************************************ 00:27:42.820 END TEST dd_no_input 00:27:42.820 ************************************ 00:27:42.820 10:53:09 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:27:42.820 10:53:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.820 10:53:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.820 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:42.820 ************************************ 
00:27:42.820 START TEST dd_no_output 00:27:42.820 ************************************ 00:27:42.820 10:53:09 -- common/autotest_common.sh@1104 -- # no_output 00:27:42.820 10:53:09 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:42.820 10:53:09 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.820 10:53:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:42.820 10:53:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.820 10:53:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.820 10:53:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:42.820 10:53:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:42.820 [2024-07-24 10:53:09.354670] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:27:42.820 10:53:09 -- common/autotest_common.sh@643 -- # es=22 00:27:42.820 10:53:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:42.820 10:53:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:42.820 10:53:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:42.820 00:27:42.820 real 0m0.097s 00:27:42.820 user 0m0.046s 00:27:42.820 sys 0m0.051s 00:27:42.820 10:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.820 ************************************ 00:27:42.820 END TEST dd_no_output 00:27:42.820 ************************************ 00:27:42.820 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:42.820 10:53:09 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:42.820 10:53:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.820 10:53:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.820 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:42.820 ************************************ 00:27:42.820 START TEST dd_wrong_blocksize 00:27:42.820 ************************************ 00:27:42.820 10:53:09 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:27:42.820 10:53:09 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:42.820 10:53:09 -- common/autotest_common.sh@640 -- # local es=0 00:27:42.820 10:53:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:42.820 10:53:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.820 10:53:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:42.820 10:53:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:42.820 10:53:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:42.820 10:53:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:42.820 [2024-07-24 10:53:09.496931] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:27:43.080 10:53:09 -- common/autotest_common.sh@643 -- # es=22 00:27:43.080 10:53:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:43.080 10:53:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:43.080 10:53:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:43.080 00:27:43.080 real 0m0.098s 00:27:43.080 user 0m0.051s 00:27:43.080 sys 0m0.047s 00:27:43.080 10:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.080 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:43.080 ************************************ 00:27:43.080 END TEST dd_wrong_blocksize 00:27:43.080 ************************************ 00:27:43.080 10:53:09 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:43.080 10:53:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:43.080 10:53:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:43.080 10:53:09 -- common/autotest_common.sh@10 -- # set +x 00:27:43.080 ************************************ 00:27:43.080 START TEST dd_smaller_blocksize 00:27:43.080 ************************************ 00:27:43.080 10:53:09 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:27:43.080 10:53:09 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:43.080 10:53:09 -- common/autotest_common.sh@640 -- # local es=0 00:27:43.080 10:53:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:43.080 10:53:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.080 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.080 10:53:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.080 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.080 10:53:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.080 10:53:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.080 10:53:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.080 10:53:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:27:43.080 10:53:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:43.080 [2024-07-24 10:53:09.654037] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:43.080 [2024-07-24 10:53:09.654302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146528 ] 00:27:43.339 [2024-07-24 10:53:09.804280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.339 [2024-07-24 10:53:09.880894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.339 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:43.666 [2024-07-24 10:53:10.058783] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:43.667 [2024-07-24 10:53:10.059182] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:43.667 [2024-07-24 10:53:10.193073] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:43.667 10:53:10 -- common/autotest_common.sh@643 -- # es=244 00:27:43.667 10:53:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:43.667 10:53:10 -- common/autotest_common.sh@652 -- # es=116 00:27:43.667 10:53:10 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:43.667 10:53:10 -- common/autotest_common.sh@660 -- # es=1 00:27:43.667 10:53:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:43.667 00:27:43.667 real 0m0.706s 00:27:43.667 user 0m0.308s 00:27:43.667 sys 0m0.296s 00:27:43.667 10:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.667 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:43.667 ************************************ 00:27:43.667 END TEST dd_smaller_blocksize 00:27:43.667 ************************************ 00:27:43.667 10:53:10 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:43.667 10:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:43.667 10:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:43.667 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 ************************************ 00:27:43.926 START TEST dd_invalid_count 00:27:43.926 ************************************ 00:27:43.926 10:53:10 -- common/autotest_common.sh@1104 -- # invalid_count 00:27:43.926 10:53:10 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:43.926 10:53:10 -- common/autotest_common.sh@640 -- # local es=0 00:27:43.926 10:53:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:43.926 10:53:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.926 10:53:10 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.926 10:53:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:43.926 10:53:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:43.926 [2024-07-24 10:53:10.405382] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:27:43.926 10:53:10 -- common/autotest_common.sh@643 -- # es=22 00:27:43.926 10:53:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:43.926 10:53:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:43.926 10:53:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:43.926 00:27:43.926 real 0m0.088s 00:27:43.926 user 0m0.049s 00:27:43.926 sys 0m0.033s 00:27:43.926 10:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.926 ************************************ 00:27:43.926 END TEST dd_invalid_count 00:27:43.926 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 ************************************ 00:27:43.926 10:53:10 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:43.926 10:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:43.926 10:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:43.926 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 ************************************ 00:27:43.926 START TEST dd_invalid_oflag 00:27:43.926 ************************************ 00:27:43.926 10:53:10 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:27:43.926 10:53:10 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:43.926 10:53:10 -- common/autotest_common.sh@640 -- # local es=0 00:27:43.926 10:53:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:43.926 10:53:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.926 10:53:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:43.926 10:53:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.926 10:53:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:43.926 10:53:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:43.926 [2024-07-24 10:53:10.549208] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:27:43.926 10:53:10 -- common/autotest_common.sh@643 -- # es=22 00:27:43.926 10:53:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:43.926 10:53:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:43.926 
10:53:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:43.926 00:27:43.926 real 0m0.102s 00:27:43.926 user 0m0.056s 00:27:43.926 sys 0m0.047s 00:27:43.926 10:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.926 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:43.926 ************************************ 00:27:43.926 END TEST dd_invalid_oflag 00:27:43.926 ************************************ 00:27:44.186 10:53:10 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:44.186 10:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:44.186 10:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.186 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:44.186 ************************************ 00:27:44.186 START TEST dd_invalid_iflag 00:27:44.186 ************************************ 00:27:44.186 10:53:10 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:27:44.186 10:53:10 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:44.186 10:53:10 -- common/autotest_common.sh@640 -- # local es=0 00:27:44.186 10:53:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:44.186 10:53:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.186 10:53:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.186 10:53:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:44.186 10:53:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:44.186 [2024-07-24 10:53:10.692137] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:27:44.186 10:53:10 -- common/autotest_common.sh@643 -- # es=22 00:27:44.186 10:53:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:44.186 10:53:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:44.186 10:53:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:44.186 00:27:44.186 real 0m0.085s 00:27:44.186 user 0m0.043s 00:27:44.186 sys 0m0.043s 00:27:44.186 10:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.186 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:44.186 ************************************ 00:27:44.186 END TEST dd_invalid_iflag 00:27:44.186 ************************************ 00:27:44.186 10:53:10 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:44.186 10:53:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:44.186 10:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.186 10:53:10 -- common/autotest_common.sh@10 -- # set +x 00:27:44.186 ************************************ 00:27:44.186 START TEST dd_unknown_flag 00:27:44.186 ************************************ 00:27:44.186 10:53:10 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:27:44.186 10:53:10 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:44.186 10:53:10 -- common/autotest_common.sh@640 -- # local es=0 00:27:44.186 10:53:10 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:44.186 10:53:10 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.186 10:53:10 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.186 10:53:10 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.186 10:53:10 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:44.186 10:53:10 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:44.186 [2024-07-24 10:53:10.836607] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:44.186 [2024-07-24 10:53:10.836890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146643 ] 00:27:44.445 [2024-07-24 10:53:10.984369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.445 [2024-07-24 10:53:11.054147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.704 [2024-07-24 10:53:11.147146] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:27:44.704 [2024-07-24 10:53:11.147551] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:44.704 [2024-07-24 10:53:11.147708] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:44.704 [2024-07-24 10:53:11.147827] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:44.704 [2024-07-24 10:53:11.270167] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:44.704 10:53:11 -- common/autotest_common.sh@643 -- # es=236 00:27:44.704 10:53:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:44.704 10:53:11 -- common/autotest_common.sh@652 -- # es=108 00:27:44.704 10:53:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:44.704 10:53:11 -- common/autotest_common.sh@660 -- # es=1 00:27:44.704 10:53:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:44.704 00:27:44.704 real 0m0.602s 00:27:44.704 user 0m0.310s 00:27:44.704 sys 0m0.192s 00:27:44.704 10:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.704 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:27:44.704 ************************************ 00:27:44.704 END 
TEST dd_unknown_flag 00:27:44.704 ************************************ 00:27:44.963 10:53:11 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:44.963 10:53:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:44.963 10:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:44.963 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:27:44.963 ************************************ 00:27:44.963 START TEST dd_invalid_json 00:27:44.963 ************************************ 00:27:44.963 10:53:11 -- common/autotest_common.sh@1104 -- # invalid_json 00:27:44.963 10:53:11 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:44.963 10:53:11 -- dd/negative_dd.sh@95 -- # : 00:27:44.963 10:53:11 -- common/autotest_common.sh@640 -- # local es=0 00:27:44.963 10:53:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:44.963 10:53:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.963 10:53:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.963 10:53:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.963 10:53:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.963 10:53:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.963 10:53:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:44.963 10:53:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:44.963 10:53:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:44.963 10:53:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:44.963 [2024-07-24 10:53:11.488396] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:44.963 [2024-07-24 10:53:11.488593] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146679 ] 00:27:44.963 [2024-07-24 10:53:11.628909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.222 [2024-07-24 10:53:11.707086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.222 [2024-07-24 10:53:11.707505] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:27:45.222 [2024-07-24 10:53:11.707713] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:45.222 [2024-07-24 10:53:11.707958] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:45.222 10:53:11 -- common/autotest_common.sh@643 -- # es=234 00:27:45.222 10:53:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:45.222 10:53:11 -- common/autotest_common.sh@652 -- # es=106 00:27:45.222 10:53:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:45.222 10:53:11 -- common/autotest_common.sh@660 -- # es=1 00:27:45.222 10:53:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:45.222 00:27:45.222 real 0m0.375s 00:27:45.222 user 0m0.174s 00:27:45.222 sys 0m0.101s 00:27:45.222 10:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.222 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:27:45.222 ************************************ 00:27:45.222 END TEST dd_invalid_json 00:27:45.222 ************************************ 00:27:45.222 ************************************ 00:27:45.222 END TEST spdk_dd_negative 00:27:45.222 ************************************ 00:27:45.222 00:27:45.222 real 0m3.244s 00:27:45.222 user 0m1.598s 00:27:45.222 sys 0m1.315s 00:27:45.222 10:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.222 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:27:45.222 00:27:45.222 real 1m9.379s 00:27:45.222 user 0m40.405s 00:27:45.222 sys 0m18.493s 00:27:45.222 10:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.222 ************************************ 00:27:45.222 END TEST spdk_dd 00:27:45.222 ************************************ 00:27:45.222 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 10:53:11 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:27:45.481 10:53:11 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:45.481 10:53:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:45.481 10:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:45.481 10:53:11 -- common/autotest_common.sh@10 -- # set +x 00:27:45.481 ************************************ 00:27:45.481 START TEST blockdev_nvme 00:27:45.481 ************************************ 00:27:45.481 10:53:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:45.481 * Looking for test storage... 
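Note: the spdk_dd negative tests that finished above all follow one pattern: run spdk_dd with an invalid option combination and require a non-zero exit status plus the specific error printed in the trace. A minimal sketch of that pattern, assuming a simplified stand-in for the harness's NOT helper (the real wrapper lives in autotest_common.sh and does more bookkeeping); the flag combinations are the ones rejected in this log:

    # hypothetical stand-in for the autotest NOT wrapper: succeed only if the command fails
    not() { "$@" && return 1 || return 0; }
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    DUMP=/home/vagrant/spdk_repo/spdk/test/dd
    not "$DD" --if="$DUMP/dd.dump0" --of="$DUMP/dd.dump1" --ob=       # --of and --ob are mutually exclusive
    not "$DD" --ob=                                                   # neither --if nor --ib given
    not "$DD" --if="$DUMP/dd.dump0"                                   # neither --of nor --ob given
    not "$DD" --if="$DUMP/dd.dump0" --of="$DUMP/dd.dump1" --bs=0      # invalid --bs value
    not "$DD" --if="$DUMP/dd.dump0" --of="$DUMP/dd.dump1" --count=-9  # invalid --count value
    not "$DD" --ib= --ob= --oflag=0                                   # --oflag may be used only with --of
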
00:27:45.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:45.481 10:53:12 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:45.481 10:53:12 -- bdev/nbd_common.sh@6 -- # set -e 00:27:45.481 10:53:12 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:45.481 10:53:12 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:45.481 10:53:12 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:45.481 10:53:12 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:45.481 10:53:12 -- bdev/blockdev.sh@18 -- # : 00:27:45.481 10:53:12 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:45.481 10:53:12 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:45.481 10:53:12 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:45.481 10:53:12 -- bdev/blockdev.sh@672 -- # uname -s 00:27:45.481 10:53:12 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:45.481 10:53:12 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:45.481 10:53:12 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:27:45.481 10:53:12 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:45.481 10:53:12 -- bdev/blockdev.sh@682 -- # dek= 00:27:45.481 10:53:12 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:45.481 10:53:12 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:45.481 10:53:12 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:45.481 10:53:12 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:27:45.481 10:53:12 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:27:45.481 10:53:12 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:45.481 10:53:12 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=146764 00:27:45.481 10:53:12 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:45.481 10:53:12 -- bdev/blockdev.sh@47 -- # waitforlisten 146764 00:27:45.482 10:53:12 -- common/autotest_common.sh@819 -- # '[' -z 146764 ']' 00:27:45.482 10:53:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.482 10:53:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:45.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.482 10:53:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.482 10:53:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:45.482 10:53:12 -- common/autotest_common.sh@10 -- # set +x 00:27:45.482 10:53:12 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:45.482 [2024-07-24 10:53:12.084560] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:45.482 [2024-07-24 10:53:12.085136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146764 ] 00:27:45.741 [2024-07-24 10:53:12.232216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.741 [2024-07-24 10:53:12.308271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:45.741 [2024-07-24 10:53:12.308553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.678 10:53:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:46.678 10:53:13 -- common/autotest_common.sh@852 -- # return 0 00:27:46.678 10:53:13 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:46.678 10:53:13 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:27:46.678 10:53:13 -- bdev/blockdev.sh@79 -- # local json 00:27:46.678 10:53:13 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:46.678 10:53:13 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:46.678 10:53:13 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:46.678 10:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.678 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:46.678 10:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.678 10:53:13 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:46.678 10:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.678 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:46.678 10:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.678 10:53:13 -- bdev/blockdev.sh@738 -- # cat 00:27:46.678 10:53:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:46.678 10:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.678 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:46.678 10:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.678 10:53:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:46.678 10:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.678 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:46.678 10:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.678 10:53:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:46.678 10:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.678 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:46.678 10:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.678 10:53:13 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:46.678 10:53:13 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:46.678 10:53:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.678 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:46.678 10:53:13 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:46.678 10:53:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.678 10:53:13 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:46.678 10:53:13 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fb52305b-404a-4dd1-8b91-b60a3e8a9ef6"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fb52305b-404a-4dd1-8b91-b60a3e8a9ef6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:27:46.678 10:53:13 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:46.678 10:53:13 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:46.678 10:53:13 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:27:46.678 10:53:13 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:46.678 10:53:13 -- bdev/blockdev.sh@752 -- # killprocess 146764 00:27:46.678 10:53:13 -- common/autotest_common.sh@926 -- # '[' -z 146764 ']' 00:27:46.678 10:53:13 -- common/autotest_common.sh@930 -- # kill -0 146764 00:27:46.678 10:53:13 -- common/autotest_common.sh@931 -- # uname 00:27:46.678 10:53:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:46.679 10:53:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146764 00:27:46.679 10:53:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:46.679 10:53:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:46.679 killing process with pid 146764 00:27:46.679 10:53:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146764' 00:27:46.679 10:53:13 -- common/autotest_common.sh@945 -- # kill 146764 00:27:46.679 10:53:13 -- common/autotest_common.sh@950 -- # wait 146764 00:27:47.248 10:53:13 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:47.248 10:53:13 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:47.248 10:53:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:47.248 10:53:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.248 10:53:13 -- common/autotest_common.sh@10 -- # set +x 00:27:47.248 ************************************ 00:27:47.248 START TEST bdev_hello_world 00:27:47.248 ************************************ 00:27:47.248 10:53:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:47.248 [2024-07-24 10:53:13.851744] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:47.248 [2024-07-24 10:53:13.852001] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146831 ] 00:27:47.550 [2024-07-24 10:53:13.998943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.550 [2024-07-24 10:53:14.070910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.810 [2024-07-24 10:53:14.288704] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:47.810 [2024-07-24 10:53:14.288803] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:27:47.810 [2024-07-24 10:53:14.288902] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:47.810 [2024-07-24 10:53:14.291471] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:47.810 [2024-07-24 10:53:14.292084] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:47.810 [2024-07-24 10:53:14.292147] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:47.810 [2024-07-24 10:53:14.292474] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:47.810 00:27:47.810 [2024-07-24 10:53:14.292546] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:48.070 00:27:48.070 real 0m0.746s 00:27:48.070 user 0m0.455s 00:27:48.070 sys 0m0.192s 00:27:48.070 10:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.070 10:53:14 -- common/autotest_common.sh@10 -- # set +x 00:27:48.070 ************************************ 00:27:48.070 END TEST bdev_hello_world 00:27:48.070 ************************************ 00:27:48.070 10:53:14 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:48.070 10:53:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:48.070 10:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:48.070 10:53:14 -- common/autotest_common.sh@10 -- # set +x 00:27:48.070 ************************************ 00:27:48.070 START TEST bdev_bounds 00:27:48.070 ************************************ 00:27:48.070 10:53:14 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:27:48.070 10:53:14 -- bdev/blockdev.sh@288 -- # bdevio_pid=146869 00:27:48.070 10:53:14 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:48.070 Process bdevio pid: 146869 00:27:48.070 10:53:14 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 146869' 00:27:48.070 10:53:14 -- bdev/blockdev.sh@291 -- # waitforlisten 146869 00:27:48.070 10:53:14 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:48.070 10:53:14 -- common/autotest_common.sh@819 -- # '[' -z 146869 ']' 00:27:48.070 10:53:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.070 10:53:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.070 10:53:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:48.070 10:53:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.070 10:53:14 -- common/autotest_common.sh@10 -- # set +x 00:27:48.070 [2024-07-24 10:53:14.656175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:48.070 [2024-07-24 10:53:14.656939] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146869 ] 00:27:48.329 [2024-07-24 10:53:14.812810] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:48.329 [2024-07-24 10:53:14.877386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.329 [2024-07-24 10:53:14.877532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.329 [2024-07-24 10:53:14.877522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:48.896 10:53:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:48.896 10:53:15 -- common/autotest_common.sh@852 -- # return 0 00:27:48.896 10:53:15 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:49.155 I/O targets: 00:27:49.155 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:27:49.155 00:27:49.155 00:27:49.155 CUnit - A unit testing framework for C - Version 2.1-3 00:27:49.155 http://cunit.sourceforge.net/ 00:27:49.155 00:27:49.155 00:27:49.155 Suite: bdevio tests on: Nvme0n1 00:27:49.155 Test: blockdev write read block ...passed 00:27:49.155 Test: blockdev write zeroes read block ...passed 00:27:49.155 Test: blockdev write zeroes read no split ...passed 00:27:49.155 Test: blockdev write zeroes read split ...passed 00:27:49.155 Test: blockdev write zeroes read split partial ...passed 00:27:49.155 Test: blockdev reset ...[2024-07-24 10:53:15.679782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:49.155 [2024-07-24 10:53:15.682252] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:49.155 passed 00:27:49.155 Test: blockdev write read 8 blocks ...passed 00:27:49.155 Test: blockdev write read size > 128k ...passed 00:27:49.155 Test: blockdev write read invalid size ...passed 00:27:49.155 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:49.155 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:49.155 Test: blockdev write read max offset ...passed 00:27:49.155 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:49.155 Test: blockdev writev readv 8 blocks ...passed 00:27:49.155 Test: blockdev writev readv 30 x 1block ...passed 00:27:49.155 Test: blockdev writev readv block ...passed 00:27:49.155 Test: blockdev writev readv size > 128k ...passed 00:27:49.155 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:49.155 Test: blockdev comparev and writev ...[2024-07-24 10:53:15.689319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1ee0d000 len:0x1000 00:27:49.155 [2024-07-24 10:53:15.689548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:49.155 passed 00:27:49.155 Test: blockdev nvme passthru rw ...passed 00:27:49.155 Test: blockdev nvme passthru vendor specific ...[2024-07-24 10:53:15.690475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:27:49.155 [2024-07-24 10:53:15.690669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:27:49.155 passed 00:27:49.155 Test: blockdev nvme admin passthru ...passed 00:27:49.155 Test: blockdev copy ...passed 00:27:49.155 00:27:49.155 Run Summary: Type Total Ran Passed Failed Inactive 00:27:49.155 suites 1 1 n/a 0 0 00:27:49.155 tests 23 23 23 0 0 00:27:49.155 asserts 152 152 152 0 n/a 00:27:49.156 00:27:49.156 Elapsed time = 0.078 seconds 00:27:49.156 0 00:27:49.156 10:53:15 -- bdev/blockdev.sh@293 -- # killprocess 146869 00:27:49.156 10:53:15 -- common/autotest_common.sh@926 -- # '[' -z 146869 ']' 00:27:49.156 10:53:15 -- common/autotest_common.sh@930 -- # kill -0 146869 00:27:49.156 10:53:15 -- common/autotest_common.sh@931 -- # uname 00:27:49.156 10:53:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:49.156 10:53:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146869 00:27:49.156 10:53:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:49.156 10:53:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:49.156 killing process with pid 146869 00:27:49.156 10:53:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146869' 00:27:49.156 10:53:15 -- common/autotest_common.sh@945 -- # kill 146869 00:27:49.156 10:53:15 -- common/autotest_common.sh@950 -- # wait 146869 00:27:49.414 10:53:15 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:49.414 00:27:49.414 real 0m1.362s 00:27:49.414 user 0m3.457s 00:27:49.414 sys 0m0.304s 00:27:49.414 10:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:49.414 10:53:15 -- common/autotest_common.sh@10 -- # set +x 00:27:49.414 ************************************ 00:27:49.414 END TEST bdev_bounds 00:27:49.414 ************************************ 00:27:49.414 10:53:16 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
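Note: for reference, a hedged sketch of the bdev_bounds flow that just completed, with paths and flags copied from the trace above and the process handling simplified (the harness uses waitforlisten/killprocess rather than sleep/kill): start the bdevio app against the generated bdev.json, then drive its CUnit suite over RPC and stop it.

    BDEVIO_DIR=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio
    "$BDEVIO_DIR/bdevio" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    sleep 1                                 # simplified stand-in for waiting on /var/tmp/spdk.sock
    "$BDEVIO_DIR/tests.py" perform_tests    # runs the "bdevio tests on: Nvme0n1" suite shown above
    kill "$bdevio_pid"
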
00:27:49.414 10:53:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:27:49.414 10:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:49.415 10:53:16 -- common/autotest_common.sh@10 -- # set +x 00:27:49.415 ************************************ 00:27:49.415 START TEST bdev_nbd 00:27:49.415 ************************************ 00:27:49.415 10:53:16 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:27:49.415 10:53:16 -- bdev/blockdev.sh@298 -- # uname -s 00:27:49.415 10:53:16 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:49.415 10:53:16 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:49.415 10:53:16 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:49.415 10:53:16 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:27:49.415 10:53:16 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:49.415 10:53:16 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:27:49.415 10:53:16 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:49.415 10:53:16 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:49.415 10:53:16 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:49.415 10:53:16 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:27:49.415 10:53:16 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:27:49.415 10:53:16 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:49.415 10:53:16 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:27:49.415 10:53:16 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:49.415 10:53:16 -- bdev/blockdev.sh@316 -- # nbd_pid=146919 00:27:49.415 10:53:16 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:49.415 10:53:16 -- bdev/blockdev.sh@318 -- # waitforlisten 146919 /var/tmp/spdk-nbd.sock 00:27:49.415 10:53:16 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:49.415 10:53:16 -- common/autotest_common.sh@819 -- # '[' -z 146919 ']' 00:27:49.415 10:53:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:49.415 10:53:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:49.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:49.415 10:53:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:49.415 10:53:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:49.415 10:53:16 -- common/autotest_common.sh@10 -- # set +x 00:27:49.415 [2024-07-24 10:53:16.073915] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:27:49.415 [2024-07-24 10:53:16.074170] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.674 [2024-07-24 10:53:16.220914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.674 [2024-07-24 10:53:16.288581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.611 10:53:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:50.611 10:53:16 -- common/autotest_common.sh@852 -- # return 0 00:27:50.611 10:53:16 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@24 -- # local i 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:50.611 10:53:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:27:50.611 10:53:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:50.611 10:53:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:50.611 10:53:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:50.611 10:53:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:50.611 10:53:17 -- common/autotest_common.sh@857 -- # local i 00:27:50.611 10:53:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:50.611 10:53:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:50.611 10:53:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:50.612 10:53:17 -- common/autotest_common.sh@861 -- # break 00:27:50.612 10:53:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:50.612 10:53:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:50.612 10:53:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:50.612 1+0 records in 00:27:50.612 1+0 records out 00:27:50.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724031 s, 5.7 MB/s 00:27:50.612 10:53:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:50.612 10:53:17 -- common/autotest_common.sh@874 -- # size=4096 00:27:50.612 10:53:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:50.612 10:53:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:50.612 10:53:17 -- common/autotest_common.sh@877 -- # return 0 00:27:50.612 10:53:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:50.612 10:53:17 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:50.612 10:53:17 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:50.871 10:53:17 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:50.871 { 00:27:50.871 "nbd_device": "/dev/nbd0", 00:27:50.871 "bdev_name": "Nvme0n1" 00:27:50.871 } 00:27:50.871 ]' 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:50.871 { 00:27:50.871 "nbd_device": "/dev/nbd0", 00:27:50.871 "bdev_name": "Nvme0n1" 00:27:50.871 } 00:27:50.871 ]' 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@51 -- # local i 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:50.871 10:53:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@41 -- # break 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@45 -- # return 0 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:51.130 10:53:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:51.390 10:53:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:51.390 10:53:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:51.390 10:53:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@65 -- # true 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@65 -- # count=0 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@122 -- # count=0 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@127 -- # return 0 00:27:51.390 10:53:18 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@12 -- # local i 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:51.390 10:53:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:27:51.649 /dev/nbd0 00:27:51.649 10:53:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:51.649 10:53:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:51.649 10:53:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:27:51.649 10:53:18 -- common/autotest_common.sh@857 -- # local i 00:27:51.649 10:53:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:27:51.649 10:53:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:27:51.649 10:53:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:27:51.649 10:53:18 -- common/autotest_common.sh@861 -- # break 00:27:51.649 10:53:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:27:51.649 10:53:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:27:51.649 10:53:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:51.649 1+0 records in 00:27:51.649 1+0 records out 00:27:51.649 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486358 s, 8.4 MB/s 00:27:51.649 10:53:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.649 10:53:18 -- common/autotest_common.sh@874 -- # size=4096 00:27:51.649 10:53:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.649 10:53:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:27:51.649 10:53:18 -- common/autotest_common.sh@877 -- # return 0 00:27:51.649 10:53:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:51.649 10:53:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:51.649 10:53:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:51.649 10:53:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:51.908 10:53:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:52.168 { 00:27:52.168 "nbd_device": "/dev/nbd0", 00:27:52.168 "bdev_name": "Nvme0n1" 00:27:52.168 } 00:27:52.168 ]' 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:52.168 { 00:27:52.168 "nbd_device": "/dev/nbd0", 00:27:52.168 "bdev_name": "Nvme0n1" 00:27:52.168 } 00:27:52.168 ]' 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@65 -- # count=1 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@66 -- # echo 1 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@95 -- # count=1 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:27:52.168 10:53:18 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:52.168 256+0 records in 00:27:52.168 256+0 records out 00:27:52.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104735 s, 100 MB/s 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:52.168 256+0 records in 00:27:52.168 256+0 records out 00:27:52.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.069122 s, 15.2 MB/s 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@51 -- # local i 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:52.168 10:53:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@41 -- # break 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@45 -- # return 0 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.428 10:53:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:52.687 
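The write/verify pass traced above is a small dd/cmp round trip against the exported NBD device. As a standalone sketch of that pattern (the temp-file path here is illustrative; the test writes to test/bdev/nbdrandtest inside the repo):

    # write a known 1 MiB pattern through /dev/nbd0, then read it back and compare
    tmp_file=/tmp/nbdrandtest                                        # assumed path, for illustration only
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256              # generate the reference data
    dd if="$tmp_file" of=/dev/nbd0 bs=4096 count=256 oflag=direct    # write phase, bypassing the page cache
    cmp -b -n 1M "$tmp_file" /dev/nbd0                               # verify phase; non-zero exit on any mismatch
    rm "$tmp_file"

A mismatch makes cmp exit non-zero, which is what would fail the surrounding test.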
10:53:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@65 -- # true 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@65 -- # count=0 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@104 -- # count=0 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@109 -- # return 0 00:27:52.687 10:53:19 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:52.687 10:53:19 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:52.946 malloc_lvol_verify 00:27:52.946 10:53:19 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:53.205 b6cf44ad-527a-42cb-9413-7ded20e53d30 00:27:53.205 10:53:19 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:53.464 84897171-356e-42dd-a17b-b65f38f40bb0 00:27:53.464 10:53:20 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:53.779 /dev/nbd0 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:53.779 mke2fs 1.46.5 (30-Dec-2021) 00:27:53.779 00:27:53.779 Filesystem too small for a journal 00:27:53.779 Discarding device blocks: 0/1024 done 00:27:53.779 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:53.779 00:27:53.779 Allocating group tables: 0/1 done 00:27:53.779 Writing inode tables: 0/1 done 00:27:53.779 Writing superblocks and filesystem accounting information: 0/1 done 00:27:53.779 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@51 -- # local i 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:53.779 10:53:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@41 -- # break 00:27:54.037 10:53:20 -- 
bdev/nbd_common.sh@45 -- # return 0 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:54.037 10:53:20 -- bdev/nbd_common.sh@147 -- # return 0 00:27:54.037 10:53:20 -- bdev/blockdev.sh@324 -- # killprocess 146919 00:27:54.037 10:53:20 -- common/autotest_common.sh@926 -- # '[' -z 146919 ']' 00:27:54.037 10:53:20 -- common/autotest_common.sh@930 -- # kill -0 146919 00:27:54.037 10:53:20 -- common/autotest_common.sh@931 -- # uname 00:27:54.037 10:53:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:54.037 10:53:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 146919 00:27:54.037 10:53:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:54.037 10:53:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:54.037 killing process with pid 146919 00:27:54.037 10:53:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 146919' 00:27:54.037 10:53:20 -- common/autotest_common.sh@945 -- # kill 146919 00:27:54.037 10:53:20 -- common/autotest_common.sh@950 -- # wait 146919 00:27:54.296 10:53:20 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:54.296 00:27:54.296 real 0m4.807s 00:27:54.296 user 0m7.366s 00:27:54.296 sys 0m1.125s 00:27:54.296 ************************************ 00:27:54.296 END TEST bdev_nbd 00:27:54.296 10:53:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.296 10:53:20 -- common/autotest_common.sh@10 -- # set +x 00:27:54.296 ************************************ 00:27:54.296 10:53:20 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:54.296 skipping fio tests on NVMe due to multi-ns failures. 00:27:54.296 10:53:20 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:27:54.296 10:53:20 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:27:54.296 10:53:20 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:54.296 10:53:20 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:54.296 10:53:20 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:27:54.296 10:53:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.296 10:53:20 -- common/autotest_common.sh@10 -- # set +x 00:27:54.296 ************************************ 00:27:54.296 START TEST bdev_verify 00:27:54.296 ************************************ 00:27:54.296 10:53:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:54.296 [2024-07-24 10:53:20.929425] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:27:54.296 [2024-07-24 10:53:20.930285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147108 ] 00:27:54.556 [2024-07-24 10:53:21.080796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:54.556 [2024-07-24 10:53:21.140694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.556 [2024-07-24 10:53:21.140698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.814 Running I/O for 5 seconds... 
00:28:00.081 00:28:00.081 Latency(us) 00:28:00.081 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.081 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:00.081 Verification LBA range: start 0x0 length 0xa0000 00:28:00.081 Nvme0n1 : 5.01 18174.58 70.99 0.00 0.00 7011.31 662.81 13047.62 00:28:00.081 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:00.081 Verification LBA range: start 0xa0000 length 0xa0000 00:28:00.081 Nvme0n1 : 5.01 18137.35 70.85 0.00 0.00 7025.33 355.61 13881.72 00:28:00.081 =================================================================================================================== 00:28:00.081 Total : 36311.92 141.84 0.00 0.00 7018.32 355.61 13881.72 00:28:10.097 00:28:10.097 real 0m14.050s 00:28:10.097 user 0m27.303s 00:28:10.097 sys 0m0.313s 00:28:10.097 10:53:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:10.097 ************************************ 00:28:10.097 10:53:34 -- common/autotest_common.sh@10 -- # set +x 00:28:10.097 END TEST bdev_verify 00:28:10.097 ************************************ 00:28:10.098 10:53:34 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:10.098 10:53:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:10.098 10:53:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:10.098 10:53:34 -- common/autotest_common.sh@10 -- # set +x 00:28:10.098 ************************************ 00:28:10.098 START TEST bdev_verify_big_io 00:28:10.098 ************************************ 00:28:10.098 10:53:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:10.098 [2024-07-24 10:53:35.036032] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:10.098 [2024-07-24 10:53:35.036283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147291 ] 00:28:10.098 [2024-07-24 10:53:35.184878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:10.098 [2024-07-24 10:53:35.252414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.098 [2024-07-24 10:53:35.252418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.098 Running I/O for 5 seconds... 
00:28:14.288 00:28:14.288 Latency(us) 00:28:14.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:14.288 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:14.288 Verification LBA range: start 0x0 length 0xa000 00:28:14.288 Nvme0n1 : 5.03 1645.25 102.83 0.00 0.00 76662.28 856.44 131548.63 00:28:14.288 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:14.288 Verification LBA range: start 0xa000 length 0xa000 00:28:14.288 Nvme0n1 : 5.03 1532.71 95.79 0.00 0.00 82289.45 539.93 126782.37 00:28:14.288 =================================================================================================================== 00:28:14.288 Total : 3177.96 198.62 0.00 0.00 79376.58 539.93 131548.63 00:28:14.548 00:28:14.548 real 0m6.086s 00:28:14.548 user 0m11.437s 00:28:14.548 sys 0m0.226s 00:28:14.548 10:53:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.548 10:53:41 -- common/autotest_common.sh@10 -- # set +x 00:28:14.548 ************************************ 00:28:14.548 END TEST bdev_verify_big_io 00:28:14.548 ************************************ 00:28:14.548 10:53:41 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:14.548 10:53:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:14.548 10:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:14.548 10:53:41 -- common/autotest_common.sh@10 -- # set +x 00:28:14.548 ************************************ 00:28:14.548 START TEST bdev_write_zeroes 00:28:14.548 ************************************ 00:28:14.548 10:53:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:14.548 [2024-07-24 10:53:41.174711] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:14.548 [2024-07-24 10:53:41.174956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147386 ] 00:28:14.808 [2024-07-24 10:53:41.321416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.808 [2024-07-24 10:53:41.388187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.067 Running I/O for 1 seconds... 
00:28:16.018 00:28:16.018 Latency(us) 00:28:16.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.018 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:16.018 Nvme0n1 : 1.00 64446.83 251.75 0.00 0.00 1980.69 666.53 9651.67 00:28:16.018 =================================================================================================================== 00:28:16.018 Total : 64446.83 251.75 0.00 0.00 1980.69 666.53 9651.67 00:28:16.277 00:28:16.277 real 0m1.720s 00:28:16.277 user 0m1.436s 00:28:16.277 sys 0m0.185s 00:28:16.277 10:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.277 10:53:42 -- common/autotest_common.sh@10 -- # set +x 00:28:16.277 ************************************ 00:28:16.277 END TEST bdev_write_zeroes 00:28:16.277 ************************************ 00:28:16.277 10:53:42 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:16.277 10:53:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:16.277 10:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:16.277 10:53:42 -- common/autotest_common.sh@10 -- # set +x 00:28:16.277 ************************************ 00:28:16.277 START TEST bdev_json_nonenclosed 00:28:16.277 ************************************ 00:28:16.277 10:53:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:16.277 [2024-07-24 10:53:42.933778] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:16.277 [2024-07-24 10:53:42.934017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147424 ] 00:28:16.537 [2024-07-24 10:53:43.072361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.537 [2024-07-24 10:53:43.135884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.537 [2024-07-24 10:53:43.136125] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:16.537 [2024-07-24 10:53:43.136168] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:16.796 00:28:16.796 real 0m0.339s 00:28:16.796 user 0m0.141s 00:28:16.796 sys 0m0.098s 00:28:16.796 10:53:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:16.796 ************************************ 00:28:16.796 END TEST bdev_json_nonenclosed 00:28:16.796 ************************************ 00:28:16.796 10:53:43 -- common/autotest_common.sh@10 -- # set +x 00:28:16.796 10:53:43 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:16.796 10:53:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:16.796 10:53:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:16.796 10:53:43 -- common/autotest_common.sh@10 -- # set +x 00:28:16.796 ************************************ 00:28:16.796 START TEST bdev_json_nonarray 00:28:16.796 ************************************ 00:28:16.796 10:53:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:16.796 [2024-07-24 10:53:43.335125] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:16.796 [2024-07-24 10:53:43.335394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147455 ] 00:28:16.796 [2024-07-24 10:53:43.479066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.056 [2024-07-24 10:53:43.560770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.056 [2024-07-24 10:53:43.561291] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
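Both JSON negative tests hand bdevperf a deliberately malformed --json file and expect spdk_subsystem_init_from_json_config to reject it with the errors logged above. A rough sketch of the shapes involved, reconstructed from those error messages rather than from the actual nonenclosed.json/nonarray.json contents (the file names and the bdevperf path are assumptions, relative to an SPDK checkout):

    # valid: top level is an object whose "subsystems" key is an array
    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > good.json
    # rejected with "not enclosed in {}": top level is an array, not an object
    printf '%s\n' '[ { "subsystem": "bdev", "config": [] } ]' > bad_nonenclosed.json
    # rejected with "'subsystems' should be an array": the value is a single object
    printf '%s\n' '{ "subsystems": { "subsystem": "bdev" } }' > bad_nonarray.json
    build/examples/bdevperf --json bad_nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1   # trips the parse error shown above

The "spdk_app_stop'd on non-zero" warning in the trace is the application shutting itself down after that parse failure.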
00:28:17.056 [2024-07-24 10:53:43.561464] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:17.056 00:28:17.056 real 0m0.382s 00:28:17.056 user 0m0.193s 00:28:17.056 sys 0m0.089s 00:28:17.056 10:53:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.056 ************************************ 00:28:17.056 END TEST bdev_json_nonarray 00:28:17.056 ************************************ 00:28:17.056 10:53:43 -- common/autotest_common.sh@10 -- # set +x 00:28:17.056 10:53:43 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:28:17.056 10:53:43 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:28:17.056 10:53:43 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:28:17.056 10:53:43 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:17.056 10:53:43 -- bdev/blockdev.sh@809 -- # cleanup 00:28:17.056 10:53:43 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:17.056 10:53:43 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:17.056 10:53:43 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:28:17.056 10:53:43 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:28:17.056 10:53:43 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:28:17.056 10:53:43 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:28:17.056 00:28:17.056 real 0m31.776s 00:28:17.056 user 0m53.985s 00:28:17.056 sys 0m3.208s 00:28:17.056 10:53:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.056 10:53:43 -- common/autotest_common.sh@10 -- # set +x 00:28:17.056 ************************************ 00:28:17.056 END TEST blockdev_nvme 00:28:17.056 ************************************ 00:28:17.315 10:53:43 -- spdk/autotest.sh@219 -- # uname -s 00:28:17.315 10:53:43 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:28:17.315 10:53:43 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:28:17.315 10:53:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:17.315 10:53:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:17.315 10:53:43 -- common/autotest_common.sh@10 -- # set +x 00:28:17.315 ************************************ 00:28:17.315 START TEST blockdev_nvme_gpt 00:28:17.315 ************************************ 00:28:17.315 10:53:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:28:17.315 * Looking for test storage... 
00:28:17.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:17.315 10:53:43 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:17.315 10:53:43 -- bdev/nbd_common.sh@6 -- # set -e 00:28:17.315 10:53:43 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:17.315 10:53:43 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:17.315 10:53:43 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:17.315 10:53:43 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:17.315 10:53:43 -- bdev/blockdev.sh@18 -- # : 00:28:17.315 10:53:43 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:28:17.315 10:53:43 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:28:17.315 10:53:43 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:28:17.315 10:53:43 -- bdev/blockdev.sh@672 -- # uname -s 00:28:17.315 10:53:43 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:28:17.315 10:53:43 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:28:17.315 10:53:43 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:28:17.315 10:53:43 -- bdev/blockdev.sh@681 -- # crypto_device= 00:28:17.315 10:53:43 -- bdev/blockdev.sh@682 -- # dek= 00:28:17.315 10:53:43 -- bdev/blockdev.sh@683 -- # env_ctx= 00:28:17.315 10:53:43 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:28:17.315 10:53:43 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:28:17.315 10:53:43 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:28:17.315 10:53:43 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:28:17.315 10:53:43 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:28:17.315 10:53:43 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=147529 00:28:17.315 10:53:43 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:17.315 10:53:43 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:17.316 10:53:43 -- bdev/blockdev.sh@47 -- # waitforlisten 147529 00:28:17.316 10:53:43 -- common/autotest_common.sh@819 -- # '[' -z 147529 ']' 00:28:17.316 10:53:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.316 10:53:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:17.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.316 10:53:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.316 10:53:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:17.316 10:53:43 -- common/autotest_common.sh@10 -- # set +x 00:28:17.316 [2024-07-24 10:53:43.912136] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:28:17.316 [2024-07-24 10:53:43.912909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147529 ] 00:28:17.575 [2024-07-24 10:53:44.055840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.575 [2024-07-24 10:53:44.130537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:17.575 [2024-07-24 10:53:44.130871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.510 10:53:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:18.510 10:53:44 -- common/autotest_common.sh@852 -- # return 0 00:28:18.510 10:53:44 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:28:18.510 10:53:44 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:28:18.510 10:53:44 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:18.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:18.510 Waiting for block devices as requested 00:28:18.769 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:18.769 10:53:45 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:28:18.769 10:53:45 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:28:18.769 10:53:45 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:28:18.769 10:53:45 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:28:18.769 10:53:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:28:18.769 10:53:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:28:18.769 10:53:45 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:28:18.769 10:53:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:18.769 10:53:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:28:18.769 10:53:45 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:28:18.769 10:53:45 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:28:18.769 10:53:45 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:28:18.769 10:53:45 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:28:18.769 10:53:45 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:28:18.769 10:53:45 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:28:18.769 10:53:45 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:28:18.769 10:53:45 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:28:18.769 BYT; 00:28:18.769 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:28:18.769 10:53:45 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:28:18.769 BYT; 00:28:18.769 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:28:18.769 10:53:45 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:28:18.769 10:53:45 -- bdev/blockdev.sh@114 -- # break 00:28:18.769 10:53:45 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:28:18.769 10:53:45 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:28:18.769 10:53:45 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:18.769 10:53:45 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:28:19.028 10:53:45 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:28:19.028 10:53:45 -- scripts/common.sh@410 -- # local spdk_guid 00:28:19.028 10:53:45 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:28:19.028 10:53:45 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:19.028 10:53:45 -- scripts/common.sh@415 -- # IFS='()' 00:28:19.028 10:53:45 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:28:19.028 10:53:45 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:19.028 10:53:45 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:28:19.028 10:53:45 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:19.028 10:53:45 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:19.028 10:53:45 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:28:19.028 10:53:45 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:28:19.028 10:53:45 -- scripts/common.sh@422 -- # local spdk_guid 00:28:19.028 10:53:45 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:28:19.028 10:53:45 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:19.028 10:53:45 -- scripts/common.sh@427 -- # IFS='()' 00:28:19.028 10:53:45 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:28:19.028 10:53:45 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:28:19.028 10:53:45 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:28:19.028 10:53:45 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:19.028 10:53:45 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:19.028 10:53:45 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:28:19.028 10:53:45 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:28:20.405 The operation has completed successfully. 00:28:20.405 10:53:46 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:28:21.342 The operation has completed successfully. 
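setup_gpt_conf has now put a GPT label on the namespace and stamped both halves with SPDK's partition type GUIDs (read out of module/bdev/gpt/gpt.h above), which is what lets the gpt vbdev module claim them as Nvme0n1p1/Nvme0n1p2 later. Condensed into a standalone sketch, with the device path taken as a given here (the test resolves it from sysfs and skips zoned devices first):

    dev=/dev/nvme0n1                                   # assumed; discovered dynamically by the test
    # two equal halves with recognisable labels
    parted -s "$dev" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    # partition 1 gets the current SPDK GPT type GUID, partition 2 the old one
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"

Stamping one partition with the current type GUID and the other with the old one exercises both variants that the gpt module is expected to recognise when the bdevs are enumerated.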
00:28:21.342 10:53:47 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:21.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:21.601 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:22.538 10:53:49 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:28:22.538 10:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.538 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:22.538 [] 00:28:22.538 10:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.538 10:53:49 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:28:22.538 10:53:49 -- bdev/blockdev.sh@79 -- # local json 00:28:22.538 10:53:49 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:28:22.538 10:53:49 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:22.538 10:53:49 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:28:22.538 10:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.538 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:22.538 10:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.538 10:53:49 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:28:22.538 10:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.538 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:22.538 10:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.538 10:53:49 -- bdev/blockdev.sh@738 -- # cat 00:28:22.538 10:53:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:28:22.538 10:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.538 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:22.538 10:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.538 10:53:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:28:22.538 10:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.538 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:22.538 10:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.538 10:53:49 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:22.538 10:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.538 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:22.538 10:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.538 10:53:49 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:28:22.538 10:53:49 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:28:22.538 10:53:49 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:28:22.538 10:53:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.538 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:22.538 10:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.797 10:53:49 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:28:22.797 10:53:49 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:28:22.797 10:53:49 -- bdev/blockdev.sh@747 -- # jq -r .name 00:28:22.797 10:53:49 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:28:22.797 10:53:49 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:28:22.797 10:53:49 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:28:22.797 10:53:49 -- bdev/blockdev.sh@752 -- # killprocess 147529 00:28:22.797 10:53:49 -- common/autotest_common.sh@926 -- # '[' -z 147529 ']' 00:28:22.797 10:53:49 -- common/autotest_common.sh@930 -- # kill -0 147529 00:28:22.797 10:53:49 -- common/autotest_common.sh@931 -- # uname 00:28:22.797 10:53:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:22.797 10:53:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147529 00:28:22.797 10:53:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:22.797 killing process with pid 147529 00:28:22.797 10:53:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:22.797 10:53:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147529' 00:28:22.797 10:53:49 -- common/autotest_common.sh@945 -- # kill 147529 00:28:22.797 10:53:49 -- common/autotest_common.sh@950 -- # wait 147529 00:28:23.365 10:53:49 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:23.365 10:53:49 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:28:23.365 10:53:49 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:23.365 10:53:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:23.365 10:53:49 -- common/autotest_common.sh@10 -- # set +x 00:28:23.365 ************************************ 00:28:23.365 START TEST bdev_hello_world 00:28:23.365 ************************************ 00:28:23.365 10:53:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 
'' 00:28:23.365 [2024-07-24 10:53:49.838113] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:23.365 [2024-07-24 10:53:49.839044] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147948 ] 00:28:23.365 [2024-07-24 10:53:49.990131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.624 [2024-07-24 10:53:50.087483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.624 [2024-07-24 10:53:50.306676] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:23.624 [2024-07-24 10:53:50.306808] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:28:23.624 [2024-07-24 10:53:50.306888] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:23.624 [2024-07-24 10:53:50.309694] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:23.624 [2024-07-24 10:53:50.310356] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:23.624 [2024-07-24 10:53:50.310412] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:23.624 [2024-07-24 10:53:50.310707] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:23.624 00:28:23.624 [2024-07-24 10:53:50.310774] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:24.192 00:28:24.192 real 0m0.784s 00:28:24.192 user 0m0.493s 00:28:24.192 sys 0m0.189s 00:28:24.192 10:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.192 ************************************ 00:28:24.192 END TEST bdev_hello_world 00:28:24.192 ************************************ 00:28:24.192 10:53:50 -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 10:53:50 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:28:24.192 10:53:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:24.192 10:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.192 10:53:50 -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 ************************************ 00:28:24.192 START TEST bdev_bounds 00:28:24.192 ************************************ 00:28:24.192 10:53:50 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:28:24.192 10:53:50 -- bdev/blockdev.sh@288 -- # bdevio_pid=147980 00:28:24.192 Process bdevio pid: 147980 00:28:24.192 10:53:50 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:24.192 10:53:50 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 147980' 00:28:24.192 10:53:50 -- bdev/blockdev.sh@291 -- # waitforlisten 147980 00:28:24.192 10:53:50 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:24.192 10:53:50 -- common/autotest_common.sh@819 -- # '[' -z 147980 ']' 00:28:24.192 10:53:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.192 10:53:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:24.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.192 10:53:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
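The bounds test that follows drives bdevio rather than bdevperf: it starts the bdevio app against the same bdev.json, waits for its RPC socket, then launches the whole suite through tests.py. Trimmed to its essentials (paths relative to an SPDK checkout, with the waitforlisten step only indicated in a comment):

    # -w: do not start testing until told to over RPC; -s 0: no pre-reserved memory (PRE_RESERVED_MEM=0 above)
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ... waitforlisten "$bdevio_pid" until /var/tmp/spdk.sock is up ...
    test/bdev/bdevio/tests.py perform_tests            # runs every registered bdevio case
    kill "$bdevio_pid"

perform_tests is what produces the "I/O targets" and CUnit output seen in the trace below.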
00:28:24.192 10:53:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:24.192 10:53:50 -- common/autotest_common.sh@10 -- # set +x 00:28:24.192 [2024-07-24 10:53:50.678171] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:24.192 [2024-07-24 10:53:50.678493] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147980 ] 00:28:24.192 [2024-07-24 10:53:50.840667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:24.450 [2024-07-24 10:53:50.940654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.450 [2024-07-24 10:53:50.940821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.450 [2024-07-24 10:53:50.940823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.016 10:53:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:25.016 10:53:51 -- common/autotest_common.sh@852 -- # return 0 00:28:25.016 10:53:51 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:25.275 I/O targets: 00:28:25.275 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:28:25.275 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:28:25.275 00:28:25.275 00:28:25.275 CUnit - A unit testing framework for C - Version 2.1-3 00:28:25.275 http://cunit.sourceforge.net/ 00:28:25.275 00:28:25.275 00:28:25.275 Suite: bdevio tests on: Nvme0n1p2 00:28:25.275 Test: blockdev write read block ...passed 00:28:25.275 Test: blockdev write zeroes read block ...passed 00:28:25.275 Test: blockdev write zeroes read no split ...passed 00:28:25.275 Test: blockdev write zeroes read split ...passed 00:28:25.275 Test: blockdev write zeroes read split partial ...passed 00:28:25.275 Test: blockdev reset ...[2024-07-24 10:53:51.803210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:25.275 [2024-07-24 10:53:51.806160] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:25.275 passed 00:28:25.275 Test: blockdev write read 8 blocks ...passed 00:28:25.275 Test: blockdev write read size > 128k ...passed 00:28:25.275 Test: blockdev write read invalid size ...passed 00:28:25.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:25.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:25.275 Test: blockdev write read max offset ...passed 00:28:25.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:25.275 Test: blockdev writev readv 8 blocks ...passed 00:28:25.275 Test: blockdev writev readv 30 x 1block ...passed 00:28:25.275 Test: blockdev writev readv block ...passed 00:28:25.275 Test: blockdev writev readv size > 128k ...passed 00:28:25.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:25.275 Test: blockdev comparev and writev ...[2024-07-24 10:53:51.812928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x3860b000 len:0x1000 00:28:25.275 [2024-07-24 10:53:51.813065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:25.275 passed 00:28:25.275 Test: blockdev nvme passthru rw ...passed 00:28:25.275 Test: blockdev nvme passthru vendor specific ...passed 00:28:25.275 Test: blockdev nvme admin passthru ...passed 00:28:25.275 Test: blockdev copy ...passed 00:28:25.275 Suite: bdevio tests on: Nvme0n1p1 00:28:25.275 Test: blockdev write read block ...passed 00:28:25.275 Test: blockdev write zeroes read block ...passed 00:28:25.275 Test: blockdev write zeroes read no split ...passed 00:28:25.275 Test: blockdev write zeroes read split ...passed 00:28:25.275 Test: blockdev write zeroes read split partial ...passed 00:28:25.275 Test: blockdev reset ...[2024-07-24 10:53:51.829172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:25.275 [2024-07-24 10:53:51.831349] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:25.275 passed 00:28:25.275 Test: blockdev write read 8 blocks ...passed 00:28:25.275 Test: blockdev write read size > 128k ...passed 00:28:25.275 Test: blockdev write read invalid size ...passed 00:28:25.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:25.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:25.275 Test: blockdev write read max offset ...passed 00:28:25.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:25.275 Test: blockdev writev readv 8 blocks ...passed 00:28:25.275 Test: blockdev writev readv 30 x 1block ...passed 00:28:25.275 Test: blockdev writev readv block ...passed 00:28:25.275 Test: blockdev writev readv size > 128k ...passed 00:28:25.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:25.275 Test: blockdev comparev and writev ...[2024-07-24 10:53:51.837314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x3860d000 len:0x1000 00:28:25.275 [2024-07-24 10:53:51.837412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:25.275 passed 00:28:25.275 Test: blockdev nvme passthru rw ...passed 00:28:25.275 Test: blockdev nvme passthru vendor specific ...passed 00:28:25.275 Test: blockdev nvme admin passthru ...passed 00:28:25.275 Test: blockdev copy ...passed 00:28:25.275 00:28:25.275 Run Summary: Type Total Ran Passed Failed Inactive 00:28:25.275 suites 2 2 n/a 0 0 00:28:25.275 tests 46 46 46 0 0 00:28:25.275 asserts 284 284 284 0 n/a 00:28:25.275 00:28:25.275 Elapsed time = 0.120 seconds 00:28:25.275 0 00:28:25.275 10:53:51 -- bdev/blockdev.sh@293 -- # killprocess 147980 00:28:25.275 10:53:51 -- common/autotest_common.sh@926 -- # '[' -z 147980 ']' 00:28:25.275 10:53:51 -- common/autotest_common.sh@930 -- # kill -0 147980 00:28:25.275 10:53:51 -- common/autotest_common.sh@931 -- # uname 00:28:25.275 10:53:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:25.275 10:53:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 147980 00:28:25.275 10:53:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:25.275 10:53:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:25.275 killing process with pid 147980 00:28:25.275 10:53:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 147980' 00:28:25.275 10:53:51 -- common/autotest_common.sh@945 -- # kill 147980 00:28:25.275 10:53:51 -- common/autotest_common.sh@950 -- # wait 147980 00:28:25.534 10:53:52 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:28:25.534 00:28:25.534 real 0m1.500s 00:28:25.534 user 0m3.739s 00:28:25.534 sys 0m0.391s 00:28:25.534 ************************************ 00:28:25.534 END TEST bdev_bounds 00:28:25.534 ************************************ 00:28:25.534 10:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.534 10:53:52 -- common/autotest_common.sh@10 -- # set +x 00:28:25.534 10:53:52 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:25.534 10:53:52 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:28:25.534 10:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:25.534 10:53:52 -- common/autotest_common.sh@10 -- # set +x 00:28:25.534 ************************************ 00:28:25.534 START TEST bdev_nbd 
00:28:25.534 ************************************ 00:28:25.534 10:53:52 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:25.534 10:53:52 -- bdev/blockdev.sh@298 -- # uname -s 00:28:25.534 10:53:52 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:28:25.534 10:53:52 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:25.534 10:53:52 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:25.534 10:53:52 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:28:25.534 10:53:52 -- bdev/blockdev.sh@302 -- # local bdev_all 00:28:25.534 10:53:52 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:28:25.534 10:53:52 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:28:25.534 10:53:52 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:25.534 10:53:52 -- bdev/blockdev.sh@309 -- # local nbd_all 00:28:25.534 10:53:52 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:28:25.534 10:53:52 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:25.534 10:53:52 -- bdev/blockdev.sh@312 -- # local nbd_list 00:28:25.534 10:53:52 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:25.534 10:53:52 -- bdev/blockdev.sh@313 -- # local bdev_list 00:28:25.534 10:53:52 -- bdev/blockdev.sh@316 -- # nbd_pid=148031 00:28:25.534 10:53:52 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:25.534 10:53:52 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:25.534 10:53:52 -- bdev/blockdev.sh@318 -- # waitforlisten 148031 /var/tmp/spdk-nbd.sock 00:28:25.534 10:53:52 -- common/autotest_common.sh@819 -- # '[' -z 148031 ']' 00:28:25.534 10:53:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:25.534 10:53:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:25.534 10:53:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:25.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:25.534 10:53:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:25.534 10:53:52 -- common/autotest_common.sh@10 -- # set +x 00:28:25.793 [2024-07-24 10:53:52.245736] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:28:25.793 [2024-07-24 10:53:52.245974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:25.793 [2024-07-24 10:53:52.399086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.051 [2024-07-24 10:53:52.490865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.619 10:53:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:26.619 10:53:53 -- common/autotest_common.sh@852 -- # return 0 00:28:26.619 10:53:53 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@24 -- # local i 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:26.619 10:53:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:28:26.877 10:53:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:26.877 10:53:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:26.877 10:53:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:26.877 10:53:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:26.877 10:53:53 -- common/autotest_common.sh@857 -- # local i 00:28:26.877 10:53:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:26.877 10:53:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:26.877 10:53:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:26.877 10:53:53 -- common/autotest_common.sh@861 -- # break 00:28:26.877 10:53:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:26.877 10:53:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:26.877 10:53:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:26.877 1+0 records in 00:28:26.877 1+0 records out 00:28:26.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653742 s, 6.3 MB/s 00:28:26.877 10:53:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:26.877 10:53:53 -- common/autotest_common.sh@874 -- # size=4096 00:28:26.877 10:53:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:26.877 10:53:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:26.877 10:53:53 -- common/autotest_common.sh@877 -- # return 0 00:28:26.877 10:53:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:26.877 10:53:53 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:26.877 10:53:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:28:27.133 10:53:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:27.134 10:53:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:27.134 10:53:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:27.134 10:53:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:27.134 10:53:53 -- common/autotest_common.sh@857 -- # local i 00:28:27.134 10:53:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:27.134 10:53:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:27.134 10:53:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:27.134 10:53:53 -- common/autotest_common.sh@861 -- # break 00:28:27.134 10:53:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:27.134 10:53:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:27.134 10:53:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:27.134 1+0 records in 00:28:27.134 1+0 records out 00:28:27.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112019 s, 3.7 MB/s 00:28:27.134 10:53:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:27.134 10:53:53 -- common/autotest_common.sh@874 -- # size=4096 00:28:27.134 10:53:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:27.134 10:53:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:27.134 10:53:53 -- common/autotest_common.sh@877 -- # return 0 00:28:27.134 10:53:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:27.134 10:53:53 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:27.134 10:53:53 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:27.391 10:53:54 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:27.391 { 00:28:27.391 "nbd_device": "/dev/nbd0", 00:28:27.391 "bdev_name": "Nvme0n1p1" 00:28:27.391 }, 00:28:27.391 { 00:28:27.391 "nbd_device": "/dev/nbd1", 00:28:27.391 "bdev_name": "Nvme0n1p2" 00:28:27.391 } 00:28:27.391 ]' 00:28:27.391 10:53:54 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:27.391 10:53:54 -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:27.391 { 00:28:27.391 "nbd_device": "/dev/nbd0", 00:28:27.391 "bdev_name": "Nvme0n1p1" 00:28:27.391 }, 00:28:27.391 { 00:28:27.391 "nbd_device": "/dev/nbd1", 00:28:27.391 "bdev_name": "Nvme0n1p2" 00:28:27.391 } 00:28:27.391 ]' 00:28:27.391 10:53:54 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:27.650 10:53:54 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:27.650 10:53:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:27.650 10:53:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:27.650 10:53:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:27.650 10:53:54 -- bdev/nbd_common.sh@51 -- # local i 00:28:27.650 10:53:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:27.650 10:53:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:27.909 10:53:54 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@41 -- # break 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@45 -- # return 0 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:27.909 10:53:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@41 -- # break 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@45 -- # return 0 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.168 10:53:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@65 -- # true 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@65 -- # count=0 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@122 -- # count=0 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@127 -- # return 0 00:28:28.428 10:53:54 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@12 -- # local i 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:28.428 10:53:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:28:28.687 /dev/nbd0 00:28:28.687 10:53:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:28.687 10:53:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:28.687 10:53:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:28.687 10:53:55 -- common/autotest_common.sh@857 -- # local i 00:28:28.687 10:53:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:28.687 10:53:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:28.687 10:53:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:28.687 10:53:55 -- common/autotest_common.sh@861 -- # break 00:28:28.687 10:53:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:28.687 10:53:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:28.687 10:53:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:28.687 1+0 records in 00:28:28.687 1+0 records out 00:28:28.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000875361 s, 4.7 MB/s 00:28:28.687 10:53:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:28.687 10:53:55 -- common/autotest_common.sh@874 -- # size=4096 00:28:28.687 10:53:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:28.687 10:53:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:28.687 10:53:55 -- common/autotest_common.sh@877 -- # return 0 00:28:28.687 10:53:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:28.687 10:53:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:28.687 10:53:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:28:28.946 /dev/nbd1 00:28:28.946 10:53:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:28.946 10:53:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:28.946 10:53:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:28.946 10:53:55 -- common/autotest_common.sh@857 -- # local i 00:28:28.946 10:53:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:28.946 10:53:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:28.946 10:53:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:28.946 10:53:55 -- common/autotest_common.sh@861 -- # break 00:28:28.946 10:53:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:28.946 10:53:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:28.946 10:53:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:28.946 1+0 records in 00:28:28.946 1+0 records out 00:28:28.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977926 s, 4.2 MB/s 00:28:28.946 10:53:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:28.946 10:53:55 -- common/autotest_common.sh@874 -- # size=4096 00:28:28.946 10:53:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:28.946 10:53:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:28.946 10:53:55 -- common/autotest_common.sh@877 -- # return 0 00:28:28.946 10:53:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:28.946 10:53:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:28.946 10:53:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
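The trace above exports each GPT partition bdev as a kernel NBD device over the dedicated /var/tmp/spdk-nbd.sock RPC socket, then waits for the node to appear in /proc/partitions and confirms it with a single 4 KiB direct read before moving on. A minimal stand-alone sketch of that flow, outside the nbd_common.sh helpers (the 20-iteration retry loop and the probe file path are illustrative, not lifted from the harness):

    # Export a bdev as /dev/nbd0 and wait until the kernel block device is usable.
    sock=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    # One 4 KiB direct read proves the device accepts I/O.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct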
00:28:28.946 10:53:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:28.946 10:53:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:29.205 { 00:28:29.205 "nbd_device": "/dev/nbd0", 00:28:29.205 "bdev_name": "Nvme0n1p1" 00:28:29.205 }, 00:28:29.205 { 00:28:29.205 "nbd_device": "/dev/nbd1", 00:28:29.205 "bdev_name": "Nvme0n1p2" 00:28:29.205 } 00:28:29.205 ]' 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:29.205 { 00:28:29.205 "nbd_device": "/dev/nbd0", 00:28:29.205 "bdev_name": "Nvme0n1p1" 00:28:29.205 }, 00:28:29.205 { 00:28:29.205 "nbd_device": "/dev/nbd1", 00:28:29.205 "bdev_name": "Nvme0n1p2" 00:28:29.205 } 00:28:29.205 ]' 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:29.205 /dev/nbd1' 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:29.205 /dev/nbd1' 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@65 -- # count=2 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@95 -- # count=2 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:29.205 256+0 records in 00:28:29.205 256+0 records out 00:28:29.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111584 s, 94.0 MB/s 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:29.205 10:53:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:29.464 256+0 records in 00:28:29.464 256+0 records out 00:28:29.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127035 s, 8.3 MB/s 00:28:29.464 10:53:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:29.464 10:53:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:29.464 256+0 records in 00:28:29.464 256+0 records out 00:28:29.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112854 s, 9.3 MB/s 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
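nbd_dd_data_verify, traced above, generates one 1 MiB random file, writes it to both exported NBD devices with direct I/O, and in the verify pass that follows compares the first 1 MiB of each device byte for byte against the same file. Stripped of the helper plumbing, and with the repository path shortened for readability, the pattern is:

    # Write pass: push the same 1 MiB of random data onto every exported device.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
    done
    # Verify pass: byte-wise comparison of what comes back from each device.
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest "$dev"
    done
    rm nbdrandtest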
00:28:29.464 10:53:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:29.464 10:53:56 -- bdev/nbd_common.sh@51 -- # local i 00:28:29.465 10:53:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:29.465 10:53:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@41 -- # break 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@45 -- # return 0 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:29.722 10:53:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@41 -- # break 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@45 -- # return 0 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.290 10:53:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:30.563 10:53:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:30.563 10:53:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:30.563 10:53:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@65 -- # true 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@65 -- # count=0 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@104 -- # count=0 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:30.563 10:53:57 -- 
bdev/nbd_common.sh@109 -- # return 0 00:28:30.563 10:53:57 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:30.563 10:53:57 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:30.830 malloc_lvol_verify 00:28:30.830 10:53:57 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:30.830 aba3406b-0ecf-40f8-8ba2-6171b95c4486 00:28:31.088 10:53:57 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:31.088 4a919f54-15e3-453d-b268-dbb049874f9f 00:28:31.347 10:53:57 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:31.347 /dev/nbd0 00:28:31.347 10:53:58 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:31.347 mke2fs 1.46.5 (30-Dec-2021) 00:28:31.347 00:28:31.347 Filesystem too small for a journal 00:28:31.347 Discarding device blocks: 0/1024 done 00:28:31.347 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:31.347 00:28:31.347 Allocating group tables: 0/1 done 00:28:31.348 Writing inode tables: 0/1 done 00:28:31.348 Writing superblocks and filesystem accounting information: 0/1 done 00:28:31.348 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@51 -- # local i 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:31.348 10:53:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@41 -- # break 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@45 -- # return 0 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:31.607 10:53:58 -- bdev/nbd_common.sh@147 -- # return 0 00:28:31.607 10:53:58 -- bdev/blockdev.sh@324 -- # killprocess 148031 00:28:31.607 10:53:58 -- common/autotest_common.sh@926 -- # '[' -z 148031 ']' 00:28:31.607 10:53:58 -- common/autotest_common.sh@930 -- # kill -0 148031 00:28:31.607 10:53:58 -- common/autotest_common.sh@931 -- # uname 00:28:31.607 10:53:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:31.607 10:53:58 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148031 00:28:31.607 10:53:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:31.607 10:53:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:31.607 killing process with pid 148031 00:28:31.607 10:53:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148031' 00:28:31.607 10:53:58 -- common/autotest_common.sh@945 -- # kill 148031 00:28:31.607 10:53:58 -- common/autotest_common.sh@950 -- # wait 148031 00:28:32.175 10:53:58 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:32.175 00:28:32.175 real 0m6.386s 00:28:32.175 user 0m9.779s 00:28:32.175 sys 0m1.543s 00:28:32.175 ************************************ 00:28:32.175 10:53:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.175 10:53:58 -- common/autotest_common.sh@10 -- # set +x 00:28:32.175 END TEST bdev_nbd 00:28:32.175 ************************************ 00:28:32.175 10:53:58 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:32.175 10:53:58 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:28:32.175 10:53:58 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:28:32.175 10:53:58 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:32.175 skipping fio tests on NVMe due to multi-ns failures. 00:28:32.175 10:53:58 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:32.175 10:53:58 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:32.175 10:53:58 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:32.175 10:53:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:32.175 10:53:58 -- common/autotest_common.sh@10 -- # set +x 00:28:32.175 ************************************ 00:28:32.175 START TEST bdev_verify 00:28:32.175 ************************************ 00:28:32.175 10:53:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:32.175 [2024-07-24 10:53:58.690386] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:32.175 [2024-07-24 10:53:58.690662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148283 ] 00:28:32.175 [2024-07-24 10:53:58.848565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:32.434 [2024-07-24 10:53:58.958083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.434 [2024-07-24 10:53:58.958095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.692 Running I/O for 5 seconds... 
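bdev_verify drives both GPT partition bdevs through bdevperf with a 128-deep queue of 4 KiB I/Os in the "verify" workload (each block is written, read back, and compared) for 5 seconds on core mask 0x3; the -C flag, which in this bdevperf build lets every enabled core submit to every bdev, is why the table below reports each namespace once per core mask. The same run can be reproduced outside the harness as:

    # 128 outstanding I/Os, 4 KiB each, verify workload, 5 s, two cores (mask 0x3) on all bdevs.
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3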
00:28:37.960 00:28:37.960 Latency(us) 00:28:37.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.960 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.960 Verification LBA range: start 0x0 length 0x4ff80 00:28:37.960 Nvme0n1p1 : 5.01 6915.58 27.01 0.00 0.00 18461.34 1712.87 33602.09 00:28:37.960 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.960 Verification LBA range: start 0x4ff80 length 0x4ff80 00:28:37.960 Nvme0n1p1 : 5.01 6980.66 27.27 0.00 0.00 18290.27 1429.88 29074.15 00:28:37.960 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:37.960 Verification LBA range: start 0x0 length 0x4ff7f 00:28:37.960 Nvme0n1p2 : 5.02 6920.46 27.03 0.00 0.00 18434.71 487.80 34317.03 00:28:37.960 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:37.960 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:28:37.960 Nvme0n1p2 : 5.02 6985.54 27.29 0.00 0.00 18263.33 385.40 31218.97 00:28:37.960 =================================================================================================================== 00:28:37.960 Total : 27802.24 108.60 0.00 0.00 18361.99 385.40 34317.03 00:28:42.148 00:28:42.148 real 0m9.385s 00:28:42.148 user 0m17.901s 00:28:42.148 sys 0m0.312s 00:28:42.148 10:54:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.148 10:54:08 -- common/autotest_common.sh@10 -- # set +x 00:28:42.148 ************************************ 00:28:42.148 END TEST bdev_verify 00:28:42.148 ************************************ 00:28:42.148 10:54:08 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:42.148 10:54:08 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:42.148 10:54:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:42.148 10:54:08 -- common/autotest_common.sh@10 -- # set +x 00:28:42.148 ************************************ 00:28:42.148 START TEST bdev_verify_big_io 00:28:42.148 ************************************ 00:28:42.148 10:54:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:42.148 [2024-07-24 10:54:08.123424] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:42.148 [2024-07-24 10:54:08.123697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148416 ] 00:28:42.148 [2024-07-24 10:54:08.272427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:42.148 [2024-07-24 10:54:08.362807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.148 [2024-07-24 10:54:08.362820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.148 Running I/O for 5 seconds... 
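bdev_verify_big_io repeats the verify run with -o 65536, so every I/O covers sixteen 4 KiB blocks. The IOPS and MiB/s columns in these tables are consistent with the configured I/O size: in the 4 KiB run above, 6915.58 IOPS x 4096 B / 2^20 = 27.01 MiB/s, and in the 64 KiB table below, 1069.65 IOPS x 65536 B / 2^20 = 66.85 MiB/s, matching the reported figures.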
00:28:47.412 00:28:47.413 Latency(us) 00:28:47.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:47.413 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:47.413 Verification LBA range: start 0x0 length 0x4ff8 00:28:47.413 Nvme0n1p1 : 5.11 1069.65 66.85 0.00 0.00 117869.69 20018.27 237359.48 00:28:47.413 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:47.413 Verification LBA range: start 0x4ff8 length 0x4ff8 00:28:47.413 Nvme0n1p1 : 5.11 993.85 62.12 0.00 0.00 126851.01 19779.96 237359.48 00:28:47.413 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:47.413 Verification LBA range: start 0x0 length 0x4ff7 00:28:47.413 Nvme0n1p2 : 5.12 1084.80 67.80 0.00 0.00 114952.30 1347.96 176351.42 00:28:47.413 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:47.413 Verification LBA range: start 0x4ff7 length 0x4ff7 00:28:47.413 Nvme0n1p2 : 5.12 1008.89 63.06 0.00 0.00 123533.40 990.49 179211.17 00:28:47.413 =================================================================================================================== 00:28:47.413 Total : 4157.19 259.82 0.00 0.00 120629.32 990.49 237359.48 00:28:47.670 00:28:47.670 real 0m6.182s 00:28:47.670 user 0m11.591s 00:28:47.670 sys 0m0.245s 00:28:47.670 ************************************ 00:28:47.670 END TEST bdev_verify_big_io 00:28:47.670 ************************************ 00:28:47.670 10:54:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.670 10:54:14 -- common/autotest_common.sh@10 -- # set +x 00:28:47.670 10:54:14 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.670 10:54:14 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:47.670 10:54:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.670 10:54:14 -- common/autotest_common.sh@10 -- # set +x 00:28:47.670 ************************************ 00:28:47.670 START TEST bdev_write_zeroes 00:28:47.670 ************************************ 00:28:47.670 10:54:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.928 [2024-07-24 10:54:14.366148] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:47.928 [2024-07-24 10:54:14.366358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148505 ] 00:28:47.928 [2024-07-24 10:54:14.513412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.928 [2024-07-24 10:54:14.581489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.192 Running I/O for 1 seconds... 
00:28:49.564 00:28:49.564 Latency(us) 00:28:49.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.564 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.564 Nvme0n1p1 : 1.01 25751.16 100.59 0.00 0.00 4958.60 2308.65 20018.27 00:28:49.564 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:49.564 Nvme0n1p2 : 1.01 25792.90 100.75 0.00 0.00 4942.23 2398.02 14239.19 00:28:49.564 =================================================================================================================== 00:28:49.564 Total : 51544.06 201.34 0.00 0.00 4950.40 2308.65 20018.27 00:28:49.564 00:28:49.564 real 0m1.775s 00:28:49.564 user 0m1.504s 00:28:49.564 sys 0m0.172s 00:28:49.564 ************************************ 00:28:49.564 END TEST bdev_write_zeroes 00:28:49.564 ************************************ 00:28:49.564 10:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.564 10:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:49.564 10:54:16 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:49.564 10:54:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:49.564 10:54:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:49.564 10:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:49.564 ************************************ 00:28:49.564 START TEST bdev_json_nonenclosed 00:28:49.564 ************************************ 00:28:49.564 10:54:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:49.564 [2024-07-24 10:54:16.195477] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:49.564 [2024-07-24 10:54:16.195753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148556 ] 00:28:49.823 [2024-07-24 10:54:16.343163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.823 [2024-07-24 10:54:16.417104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.823 [2024-07-24 10:54:16.417349] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:49.823 [2024-07-24 10:54:16.417396] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:50.082 00:28:50.082 real 0m0.388s 00:28:50.082 user 0m0.173s 00:28:50.082 sys 0m0.115s 00:28:50.082 ************************************ 00:28:50.082 END TEST bdev_json_nonenclosed 00:28:50.082 ************************************ 00:28:50.082 10:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.082 10:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:50.082 10:54:16 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:50.082 10:54:16 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:50.082 10:54:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.082 10:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:50.082 ************************************ 00:28:50.082 START TEST bdev_json_nonarray 00:28:50.082 ************************************ 00:28:50.082 10:54:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:50.082 [2024-07-24 10:54:16.632810] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:50.082 [2024-07-24 10:54:16.633023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148578 ] 00:28:50.340 [2024-07-24 10:54:16.782378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.340 [2024-07-24 10:54:16.862718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.340 [2024-07-24 10:54:16.862999] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
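The two JSON negative tests hand bdevperf deliberately malformed configs: nonenclosed.json carries the subsystem content without the enclosing braces, and nonarray.json supplies "subsystems" as something other than an array. In both cases spdk_subsystem_init_from_json_config rejects the file and the app stops with a non-zero code, which is exactly what the tests expect. For contrast, a minimal well-formed config has the following shape (a sketch only; the bdev.json used elsewhere in this run also carries the Nvme0n1 attach and GPT split parameters):

    cat > minimal.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }
    EOF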
00:28:50.340 [2024-07-24 10:54:16.863065] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:50.340 00:28:50.340 real 0m0.400s 00:28:50.340 user 0m0.196s 00:28:50.340 sys 0m0.104s 00:28:50.340 ************************************ 00:28:50.340 END TEST bdev_json_nonarray 00:28:50.340 ************************************ 00:28:50.340 10:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.340 10:54:16 -- common/autotest_common.sh@10 -- # set +x 00:28:50.340 10:54:17 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:28:50.340 10:54:17 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:28:50.340 10:54:17 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:28:50.340 10:54:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:50.340 10:54:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.340 10:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:50.599 ************************************ 00:28:50.599 START TEST bdev_gpt_uuid 00:28:50.599 ************************************ 00:28:50.599 10:54:17 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:28:50.599 10:54:17 -- bdev/blockdev.sh@612 -- # local bdev 00:28:50.599 10:54:17 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:28:50.599 10:54:17 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:50.599 10:54:17 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=148617 00:28:50.599 10:54:17 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:50.599 10:54:17 -- bdev/blockdev.sh@47 -- # waitforlisten 148617 00:28:50.599 10:54:17 -- common/autotest_common.sh@819 -- # '[' -z 148617 ']' 00:28:50.599 10:54:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.599 10:54:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:50.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.599 10:54:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.599 10:54:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:50.599 10:54:17 -- common/autotest_common.sh@10 -- # set +x 00:28:50.599 [2024-07-24 10:54:17.094122] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:50.599 [2024-07-24 10:54:17.094720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148617 ] 00:28:50.599 [2024-07-24 10:54:17.243427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.857 [2024-07-24 10:54:17.334937] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:50.857 [2024-07-24 10:54:17.335203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.423 10:54:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.423 10:54:18 -- common/autotest_common.sh@852 -- # return 0 00:28:51.423 10:54:18 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:51.423 10:54:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.423 10:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:51.682 Some configs were skipped because the RPC state that can call them passed over. 
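bdev_gpt_uuid starts a bare spdk_tgt, loads the generated bdev.json over /var/tmp/spdk.sock, and then asks bdev_get_bdevs for each GPT partition by its partition GUID, checking that exactly one bdev comes back and that its alias and unique_partition_guid match. The queries that follow in the trace are equivalent to running, against the same target:

    # Look up the first GPT partition bdev by GUID and print the fields the test compares.
    ./scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].aliases[0], .[0].driver_specific.gpt.unique_partition_guid'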
00:28:51.682 10:54:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.682 10:54:18 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:28:51.682 10:54:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.682 10:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:51.682 10:54:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.682 10:54:18 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:28:51.682 10:54:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.682 10:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:51.682 10:54:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.682 10:54:18 -- bdev/blockdev.sh@619 -- # bdev='[ 00:28:51.682 { 00:28:51.682 "name": "Nvme0n1p1", 00:28:51.682 "aliases": [ 00:28:51.682 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:28:51.682 ], 00:28:51.682 "product_name": "GPT Disk", 00:28:51.682 "block_size": 4096, 00:28:51.682 "num_blocks": 655104, 00:28:51.682 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:51.682 "assigned_rate_limits": { 00:28:51.682 "rw_ios_per_sec": 0, 00:28:51.682 "rw_mbytes_per_sec": 0, 00:28:51.682 "r_mbytes_per_sec": 0, 00:28:51.682 "w_mbytes_per_sec": 0 00:28:51.682 }, 00:28:51.682 "claimed": false, 00:28:51.682 "zoned": false, 00:28:51.682 "supported_io_types": { 00:28:51.682 "read": true, 00:28:51.682 "write": true, 00:28:51.682 "unmap": true, 00:28:51.682 "write_zeroes": true, 00:28:51.682 "flush": true, 00:28:51.682 "reset": true, 00:28:51.682 "compare": true, 00:28:51.682 "compare_and_write": false, 00:28:51.682 "abort": true, 00:28:51.682 "nvme_admin": false, 00:28:51.682 "nvme_io": false 00:28:51.682 }, 00:28:51.682 "driver_specific": { 00:28:51.682 "gpt": { 00:28:51.682 "base_bdev": "Nvme0n1", 00:28:51.682 "offset_blocks": 256, 00:28:51.682 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:28:51.682 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:51.682 "partition_name": "SPDK_TEST_first" 00:28:51.682 } 00:28:51.682 } 00:28:51.682 } 00:28:51.682 ]' 00:28:51.682 10:54:18 -- bdev/blockdev.sh@620 -- # jq -r length 00:28:51.682 10:54:18 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:28:51.682 10:54:18 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:28:51.682 10:54:18 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:51.682 10:54:18 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:51.682 10:54:18 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:51.682 10:54:18 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:51.682 10:54:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.682 10:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:51.682 10:54:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.682 10:54:18 -- bdev/blockdev.sh@624 -- # bdev='[ 00:28:51.682 { 00:28:51.682 "name": "Nvme0n1p2", 00:28:51.682 "aliases": [ 00:28:51.682 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:28:51.682 ], 00:28:51.682 "product_name": "GPT Disk", 00:28:51.682 "block_size": 4096, 00:28:51.682 "num_blocks": 655103, 00:28:51.682 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:51.682 "assigned_rate_limits": { 00:28:51.682 "rw_ios_per_sec": 0, 00:28:51.682 
"rw_mbytes_per_sec": 0, 00:28:51.682 "r_mbytes_per_sec": 0, 00:28:51.682 "w_mbytes_per_sec": 0 00:28:51.682 }, 00:28:51.682 "claimed": false, 00:28:51.682 "zoned": false, 00:28:51.682 "supported_io_types": { 00:28:51.682 "read": true, 00:28:51.682 "write": true, 00:28:51.682 "unmap": true, 00:28:51.682 "write_zeroes": true, 00:28:51.682 "flush": true, 00:28:51.682 "reset": true, 00:28:51.682 "compare": true, 00:28:51.682 "compare_and_write": false, 00:28:51.682 "abort": true, 00:28:51.682 "nvme_admin": false, 00:28:51.682 "nvme_io": false 00:28:51.682 }, 00:28:51.682 "driver_specific": { 00:28:51.682 "gpt": { 00:28:51.682 "base_bdev": "Nvme0n1", 00:28:51.682 "offset_blocks": 655360, 00:28:51.682 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:28:51.682 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:51.682 "partition_name": "SPDK_TEST_second" 00:28:51.682 } 00:28:51.682 } 00:28:51.682 } 00:28:51.682 ]' 00:28:51.682 10:54:18 -- bdev/blockdev.sh@625 -- # jq -r length 00:28:51.941 10:54:18 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:28:51.941 10:54:18 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:28:51.941 10:54:18 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:51.941 10:54:18 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:51.941 10:54:18 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:51.941 10:54:18 -- bdev/blockdev.sh@629 -- # killprocess 148617 00:28:51.941 10:54:18 -- common/autotest_common.sh@926 -- # '[' -z 148617 ']' 00:28:51.941 10:54:18 -- common/autotest_common.sh@930 -- # kill -0 148617 00:28:51.941 10:54:18 -- common/autotest_common.sh@931 -- # uname 00:28:51.941 10:54:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:51.941 10:54:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 148617 00:28:51.941 10:54:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:51.941 killing process with pid 148617 00:28:51.941 10:54:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:51.941 10:54:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 148617' 00:28:51.941 10:54:18 -- common/autotest_common.sh@945 -- # kill 148617 00:28:51.941 10:54:18 -- common/autotest_common.sh@950 -- # wait 148617 00:28:52.514 ************************************ 00:28:52.514 END TEST bdev_gpt_uuid 00:28:52.514 ************************************ 00:28:52.514 00:28:52.514 real 0m1.956s 00:28:52.514 user 0m2.265s 00:28:52.514 sys 0m0.430s 00:28:52.514 10:54:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.514 10:54:18 -- common/autotest_common.sh@10 -- # set +x 00:28:52.514 10:54:19 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:28:52.514 10:54:19 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:52.514 10:54:19 -- bdev/blockdev.sh@809 -- # cleanup 00:28:52.514 10:54:19 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:52.514 10:54:19 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:52.514 10:54:19 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:28:52.514 10:54:19 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:28:52.514 10:54:19 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:28:52.514 10:54:19 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:52.772 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:52.772 Waiting for block devices as requested 00:28:52.772 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:53.030 10:54:19 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:28:53.030 10:54:19 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:28:53.030 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:28:53.031 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:28:53.031 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:28:53.031 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:28:53.031 10:54:19 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:28:53.031 00:28:53.031 real 0m35.753s 00:28:53.031 user 0m54.845s 00:28:53.031 sys 0m5.704s 00:28:53.031 10:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.031 ************************************ 00:28:53.031 10:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:53.031 END TEST blockdev_nvme_gpt 00:28:53.031 ************************************ 00:28:53.031 10:54:19 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:53.031 10:54:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:53.031 10:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:53.031 10:54:19 -- common/autotest_common.sh@10 -- # set +x 00:28:53.031 ************************************ 00:28:53.031 START TEST nvme 00:28:53.031 ************************************ 00:28:53.031 10:54:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:53.031 * Looking for test storage... 00:28:53.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:53.031 10:54:19 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:53.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:53.597 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:54.533 10:54:21 -- nvme/nvme.sh@79 -- # uname 00:28:54.533 10:54:21 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:28:54.533 10:54:21 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:28:54.533 10:54:21 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:28:54.533 10:54:21 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:28:54.533 10:54:21 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:28:54.533 10:54:21 -- common/autotest_common.sh@1045 -- # echo 0 00:28:54.533 10:54:21 -- common/autotest_common.sh@1047 -- # stubpid=149007 00:28:54.533 10:54:21 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:28:54.533 Waiting for stub to ready for secondary processes... 00:28:54.533 10:54:21 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:28:54.533 10:54:21 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:54.533 10:54:21 -- common/autotest_common.sh@1051 -- # [[ -e /proc/149007 ]] 00:28:54.533 10:54:21 -- common/autotest_common.sh@1052 -- # sleep 1s 00:28:54.792 [2024-07-24 10:54:21.243469] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:28:54.792 [2024-07-24 10:54:21.243741] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.728 10:54:22 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:55.728 10:54:22 -- common/autotest_common.sh@1051 -- # [[ -e /proc/149007 ]] 00:28:55.728 10:54:22 -- common/autotest_common.sh@1052 -- # sleep 1s 00:28:55.987 [2024-07-24 10:54:22.562162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:55.987 [2024-07-24 10:54:22.632601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:55.987 [2024-07-24 10:54:22.632758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:55.987 [2024-07-24 10:54:22.632760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.987 [2024-07-24 10:54:22.642890] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:28:55.987 [2024-07-24 10:54:22.655102] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:28:55.987 [2024-07-24 10:54:22.656345] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:28:56.553 10:54:23 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:56.553 done. 00:28:56.553 10:54:23 -- common/autotest_common.sh@1054 -- # echo done. 00:28:56.553 10:54:23 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:56.553 10:54:23 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:28:56.553 10:54:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:56.553 10:54:23 -- common/autotest_common.sh@10 -- # set +x 00:28:56.553 ************************************ 00:28:56.553 START TEST nvme_reset 00:28:56.553 ************************************ 00:28:56.553 10:54:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:56.811 Initializing NVMe Controllers 00:28:56.811 Skipping QEMU NVMe SSD at 0000:00:06.0 00:28:56.811 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:28:57.069 00:28:57.069 real 0m0.270s 00:28:57.069 user 0m0.116s 00:28:57.069 sys 0m0.087s 00:28:57.069 10:54:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.069 ************************************ 00:28:57.069 END TEST nvme_reset 00:28:57.069 ************************************ 00:28:57.069 10:54:23 -- common/autotest_common.sh@10 -- # set +x 00:28:57.069 10:54:23 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:28:57.069 10:54:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:57.069 10:54:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:57.069 10:54:23 -- common/autotest_common.sh@10 -- # set +x 00:28:57.069 ************************************ 00:28:57.069 START TEST nvme_identify 00:28:57.069 ************************************ 00:28:57.069 10:54:23 -- common/autotest_common.sh@1104 -- # nvme_identify 00:28:57.069 10:54:23 -- nvme/nvme.sh@12 -- # bdfs=() 00:28:57.069 10:54:23 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:28:57.069 10:54:23 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:28:57.069 10:54:23 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:28:57.069 10:54:23 -- common/autotest_common.sh@1498 -- # bdfs=() 
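get_nvme_bdfs, entered above and continued below, builds the controller list with gen_nvme.sh, which emits a JSON bdev config entry (with params.traddr) for every NVMe device present; the PCI addresses are extracted with jq and each one is handed to spdk_nvme_identify. On this VM only 0000:00:06.0 is found. The discovery-plus-identify step, reduced to a stand-alone sketch run from the repository root:

    # Enumerate NVMe PCI addresses the way the test does, then identify each controller.
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        ./build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
    done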
00:28:57.069 10:54:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:57.069 10:54:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:57.069 10:54:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:57.069 10:54:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:57.069 10:54:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:57.069 10:54:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:28:57.069 10:54:23 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:28:57.327 [2024-07-24 10:54:23.828292] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 149041 terminated unexpected 00:28:57.327 ===================================================== 00:28:57.327 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:57.327 ===================================================== 00:28:57.327 Controller Capabilities/Features 00:28:57.327 ================================ 00:28:57.327 Vendor ID: 1b36 00:28:57.327 Subsystem Vendor ID: 1af4 00:28:57.327 Serial Number: 12340 00:28:57.327 Model Number: QEMU NVMe Ctrl 00:28:57.327 Firmware Version: 8.0.0 00:28:57.327 Recommended Arb Burst: 6 00:28:57.327 IEEE OUI Identifier: 00 54 52 00:28:57.327 Multi-path I/O 00:28:57.327 May have multiple subsystem ports: No 00:28:57.327 May have multiple controllers: No 00:28:57.327 Associated with SR-IOV VF: No 00:28:57.327 Max Data Transfer Size: 524288 00:28:57.327 Max Number of Namespaces: 256 00:28:57.327 Max Number of I/O Queues: 64 00:28:57.327 NVMe Specification Version (VS): 1.4 00:28:57.327 NVMe Specification Version (Identify): 1.4 00:28:57.327 Maximum Queue Entries: 2048 00:28:57.327 Contiguous Queues Required: Yes 00:28:57.327 Arbitration Mechanisms Supported 00:28:57.327 Weighted Round Robin: Not Supported 00:28:57.327 Vendor Specific: Not Supported 00:28:57.327 Reset Timeout: 7500 ms 00:28:57.327 Doorbell Stride: 4 bytes 00:28:57.327 NVM Subsystem Reset: Not Supported 00:28:57.327 Command Sets Supported 00:28:57.327 NVM Command Set: Supported 00:28:57.327 Boot Partition: Not Supported 00:28:57.327 Memory Page Size Minimum: 4096 bytes 00:28:57.327 Memory Page Size Maximum: 65536 bytes 00:28:57.327 Persistent Memory Region: Not Supported 00:28:57.327 Optional Asynchronous Events Supported 00:28:57.327 Namespace Attribute Notices: Supported 00:28:57.327 Firmware Activation Notices: Not Supported 00:28:57.327 ANA Change Notices: Not Supported 00:28:57.327 PLE Aggregate Log Change Notices: Not Supported 00:28:57.327 LBA Status Info Alert Notices: Not Supported 00:28:57.327 EGE Aggregate Log Change Notices: Not Supported 00:28:57.327 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.327 Zone Descriptor Change Notices: Not Supported 00:28:57.327 Discovery Log Change Notices: Not Supported 00:28:57.327 Controller Attributes 00:28:57.327 128-bit Host Identifier: Not Supported 00:28:57.327 Non-Operational Permissive Mode: Not Supported 00:28:57.327 NVM Sets: Not Supported 00:28:57.327 Read Recovery Levels: Not Supported 00:28:57.327 Endurance Groups: Not Supported 00:28:57.327 Predictable Latency Mode: Not Supported 00:28:57.327 Traffic Based Keep ALive: Not Supported 00:28:57.327 Namespace Granularity: Not Supported 00:28:57.327 SQ Associations: Not Supported 00:28:57.327 UUID List: Not Supported 00:28:57.327 Multi-Domain Subsystem: Not Supported 00:28:57.327 
Fixed Capacity Management: Not Supported 00:28:57.327 Variable Capacity Management: Not Supported 00:28:57.327 Delete Endurance Group: Not Supported 00:28:57.327 Delete NVM Set: Not Supported 00:28:57.327 Extended LBA Formats Supported: Supported 00:28:57.327 Flexible Data Placement Supported: Not Supported 00:28:57.327 00:28:57.327 Controller Memory Buffer Support 00:28:57.327 ================================ 00:28:57.327 Supported: No 00:28:57.327 00:28:57.327 Persistent Memory Region Support 00:28:57.327 ================================ 00:28:57.327 Supported: No 00:28:57.327 00:28:57.327 Admin Command Set Attributes 00:28:57.327 ============================ 00:28:57.327 Security Send/Receive: Not Supported 00:28:57.327 Format NVM: Supported 00:28:57.327 Firmware Activate/Download: Not Supported 00:28:57.327 Namespace Management: Supported 00:28:57.327 Device Self-Test: Not Supported 00:28:57.327 Directives: Supported 00:28:57.327 NVMe-MI: Not Supported 00:28:57.327 Virtualization Management: Not Supported 00:28:57.327 Doorbell Buffer Config: Supported 00:28:57.327 Get LBA Status Capability: Not Supported 00:28:57.327 Command & Feature Lockdown Capability: Not Supported 00:28:57.327 Abort Command Limit: 4 00:28:57.327 Async Event Request Limit: 4 00:28:57.327 Number of Firmware Slots: N/A 00:28:57.327 Firmware Slot 1 Read-Only: N/A 00:28:57.327 Firmware Activation Without Reset: N/A 00:28:57.327 Multiple Update Detection Support: N/A 00:28:57.327 Firmware Update Granularity: No Information Provided 00:28:57.327 Per-Namespace SMART Log: Yes 00:28:57.327 Asymmetric Namespace Access Log Page: Not Supported 00:28:57.327 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:57.327 Command Effects Log Page: Supported 00:28:57.327 Get Log Page Extended Data: Supported 00:28:57.327 Telemetry Log Pages: Not Supported 00:28:57.327 Persistent Event Log Pages: Not Supported 00:28:57.327 Supported Log Pages Log Page: May Support 00:28:57.327 Commands Supported & Effects Log Page: Not Supported 00:28:57.327 Feature Identifiers & Effects Log Page:May Support 00:28:57.327 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.327 Data Area 4 for Telemetry Log: Not Supported 00:28:57.327 Error Log Page Entries Supported: 1 00:28:57.327 Keep Alive: Not Supported 00:28:57.327 00:28:57.327 NVM Command Set Attributes 00:28:57.327 ========================== 00:28:57.327 Submission Queue Entry Size 00:28:57.327 Max: 64 00:28:57.327 Min: 64 00:28:57.327 Completion Queue Entry Size 00:28:57.327 Max: 16 00:28:57.327 Min: 16 00:28:57.327 Number of Namespaces: 256 00:28:57.327 Compare Command: Supported 00:28:57.327 Write Uncorrectable Command: Not Supported 00:28:57.327 Dataset Management Command: Supported 00:28:57.327 Write Zeroes Command: Supported 00:28:57.327 Set Features Save Field: Supported 00:28:57.327 Reservations: Not Supported 00:28:57.327 Timestamp: Supported 00:28:57.327 Copy: Supported 00:28:57.327 Volatile Write Cache: Present 00:28:57.327 Atomic Write Unit (Normal): 1 00:28:57.327 Atomic Write Unit (PFail): 1 00:28:57.327 Atomic Compare & Write Unit: 1 00:28:57.327 Fused Compare & Write: Not Supported 00:28:57.327 Scatter-Gather List 00:28:57.327 SGL Command Set: Supported 00:28:57.327 SGL Keyed: Not Supported 00:28:57.327 SGL Bit Bucket Descriptor: Not Supported 00:28:57.327 SGL Metadata Pointer: Not Supported 00:28:57.327 Oversized SGL: Not Supported 00:28:57.327 SGL Metadata Address: Not Supported 00:28:57.327 SGL Offset: Not Supported 00:28:57.327 Transport SGL Data Block: Not Supported 
00:28:57.327 Replay Protected Memory Block: Not Supported 00:28:57.327 00:28:57.327 Firmware Slot Information 00:28:57.327 ========================= 00:28:57.327 Active slot: 1 00:28:57.327 Slot 1 Firmware Revision: 1.0 00:28:57.327 00:28:57.327 00:28:57.328 Commands Supported and Effects 00:28:57.328 ============================== 00:28:57.328 Admin Commands 00:28:57.328 -------------- 00:28:57.328 Delete I/O Submission Queue (00h): Supported 00:28:57.328 Create I/O Submission Queue (01h): Supported 00:28:57.328 Get Log Page (02h): Supported 00:28:57.328 Delete I/O Completion Queue (04h): Supported 00:28:57.328 Create I/O Completion Queue (05h): Supported 00:28:57.328 Identify (06h): Supported 00:28:57.328 Abort (08h): Supported 00:28:57.328 Set Features (09h): Supported 00:28:57.328 Get Features (0Ah): Supported 00:28:57.328 Asynchronous Event Request (0Ch): Supported 00:28:57.328 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:57.328 Directive Send (19h): Supported 00:28:57.328 Directive Receive (1Ah): Supported 00:28:57.328 Virtualization Management (1Ch): Supported 00:28:57.328 Doorbell Buffer Config (7Ch): Supported 00:28:57.328 Format NVM (80h): Supported LBA-Change 00:28:57.328 I/O Commands 00:28:57.328 ------------ 00:28:57.328 Flush (00h): Supported LBA-Change 00:28:57.328 Write (01h): Supported LBA-Change 00:28:57.328 Read (02h): Supported 00:28:57.328 Compare (05h): Supported 00:28:57.328 Write Zeroes (08h): Supported LBA-Change 00:28:57.328 Dataset Management (09h): Supported LBA-Change 00:28:57.328 Unknown (0Ch): Supported 00:28:57.328 Unknown (12h): Supported 00:28:57.328 Copy (19h): Supported LBA-Change 00:28:57.328 Unknown (1Dh): Supported LBA-Change 00:28:57.328 00:28:57.328 Error Log 00:28:57.328 ========= 00:28:57.328 00:28:57.328 Arbitration 00:28:57.328 =========== 00:28:57.328 Arbitration Burst: no limit 00:28:57.328 00:28:57.328 Power Management 00:28:57.328 ================ 00:28:57.328 Number of Power States: 1 00:28:57.328 Current Power State: Power State #0 00:28:57.328 Power State #0: 00:28:57.328 Max Power: 25.00 W 00:28:57.328 Non-Operational State: Operational 00:28:57.328 Entry Latency: 16 microseconds 00:28:57.328 Exit Latency: 4 microseconds 00:28:57.328 Relative Read Throughput: 0 00:28:57.328 Relative Read Latency: 0 00:28:57.328 Relative Write Throughput: 0 00:28:57.328 Relative Write Latency: 0 00:28:57.328 Idle Power: Not Reported 00:28:57.328 Active Power: Not Reported 00:28:57.328 Non-Operational Permissive Mode: Not Supported 00:28:57.328 00:28:57.328 Health Information 00:28:57.328 ================== 00:28:57.328 Critical Warnings: 00:28:57.328 Available Spare Space: OK 00:28:57.328 Temperature: OK 00:28:57.328 Device Reliability: OK 00:28:57.328 Read Only: No 00:28:57.328 Volatile Memory Backup: OK 00:28:57.328 Current Temperature: 323 Kelvin (50 Celsius) 00:28:57.328 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:57.328 Available Spare: 0% 00:28:57.328 Available Spare Threshold: 0% 00:28:57.328 Life Percentage Used: 0% 00:28:57.328 Data Units Read: 7834 00:28:57.328 Data Units Written: 3828 00:28:57.328 Host Read Commands: 362561 00:28:57.328 Host Write Commands: 196599 00:28:57.328 Controller Busy Time: 0 minutes 00:28:57.328 Power Cycles: 0 00:28:57.328 Power On Hours: 0 hours 00:28:57.328 Unsafe Shutdowns: 0 00:28:57.328 Unrecoverable Media Errors: 0 00:28:57.328 Lifetime Error Log Entries: 0 00:28:57.328 Warning Temperature Time: 0 minutes 00:28:57.328 Critical Temperature Time: 0 minutes 00:28:57.328 00:28:57.328 
Number of Queues 00:28:57.328 ================ 00:28:57.328 Number of I/O Submission Queues: 64 00:28:57.328 Number of I/O Completion Queues: 64 00:28:57.328 00:28:57.328 ZNS Specific Controller Data 00:28:57.328 ============================ 00:28:57.328 Zone Append Size Limit: 0 00:28:57.328 00:28:57.328 00:28:57.328 Active Namespaces 00:28:57.328 ================= 00:28:57.328 Namespace ID:1 00:28:57.328 Error Recovery Timeout: Unlimited 00:28:57.328 Command Set Identifier: NVM (00h) 00:28:57.328 Deallocate: Supported 00:28:57.328 Deallocated/Unwritten Error: Supported 00:28:57.328 Deallocated Read Value: All 0x00 00:28:57.328 Deallocate in Write Zeroes: Not Supported 00:28:57.328 Deallocated Guard Field: 0xFFFF 00:28:57.328 Flush: Supported 00:28:57.328 Reservation: Not Supported 00:28:57.328 Namespace Sharing Capabilities: Private 00:28:57.328 Size (in LBAs): 1310720 (5GiB) 00:28:57.328 Capacity (in LBAs): 1310720 (5GiB) 00:28:57.328 Utilization (in LBAs): 1310720 (5GiB) 00:28:57.328 Thin Provisioning: Not Supported 00:28:57.328 Per-NS Atomic Units: No 00:28:57.328 Maximum Single Source Range Length: 128 00:28:57.328 Maximum Copy Length: 128 00:28:57.328 Maximum Source Range Count: 128 00:28:57.328 NGUID/EUI64 Never Reused: No 00:28:57.328 Namespace Write Protected: No 00:28:57.328 Number of LBA Formats: 8 00:28:57.328 Current LBA Format: LBA Format #04 00:28:57.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:57.328 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:57.328 LBA Format #02: Data Size: 512 Metadata Size: 16 00:28:57.328 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:57.328 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:57.328 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:57.328 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:57.328 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:57.328 00:28:57.328 10:54:23 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:28:57.328 10:54:23 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:28:57.586 ===================================================== 00:28:57.586 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:57.586 ===================================================== 00:28:57.586 Controller Capabilities/Features 00:28:57.586 ================================ 00:28:57.586 Vendor ID: 1b36 00:28:57.586 Subsystem Vendor ID: 1af4 00:28:57.586 Serial Number: 12340 00:28:57.586 Model Number: QEMU NVMe Ctrl 00:28:57.586 Firmware Version: 8.0.0 00:28:57.586 Recommended Arb Burst: 6 00:28:57.586 IEEE OUI Identifier: 00 54 52 00:28:57.586 Multi-path I/O 00:28:57.587 May have multiple subsystem ports: No 00:28:57.587 May have multiple controllers: No 00:28:57.587 Associated with SR-IOV VF: No 00:28:57.587 Max Data Transfer Size: 524288 00:28:57.587 Max Number of Namespaces: 256 00:28:57.587 Max Number of I/O Queues: 64 00:28:57.587 NVMe Specification Version (VS): 1.4 00:28:57.587 NVMe Specification Version (Identify): 1.4 00:28:57.587 Maximum Queue Entries: 2048 00:28:57.587 Contiguous Queues Required: Yes 00:28:57.587 Arbitration Mechanisms Supported 00:28:57.587 Weighted Round Robin: Not Supported 00:28:57.587 Vendor Specific: Not Supported 00:28:57.587 Reset Timeout: 7500 ms 00:28:57.587 Doorbell Stride: 4 bytes 00:28:57.587 NVM Subsystem Reset: Not Supported 00:28:57.587 Command Sets Supported 00:28:57.587 NVM Command Set: Supported 00:28:57.587 Boot Partition: Not Supported 00:28:57.587 Memory Page Size 
Minimum: 4096 bytes 00:28:57.587 Memory Page Size Maximum: 65536 bytes 00:28:57.587 Persistent Memory Region: Not Supported 00:28:57.587 Optional Asynchronous Events Supported 00:28:57.587 Namespace Attribute Notices: Supported 00:28:57.587 Firmware Activation Notices: Not Supported 00:28:57.587 ANA Change Notices: Not Supported 00:28:57.587 PLE Aggregate Log Change Notices: Not Supported 00:28:57.587 LBA Status Info Alert Notices: Not Supported 00:28:57.587 EGE Aggregate Log Change Notices: Not Supported 00:28:57.587 Normal NVM Subsystem Shutdown event: Not Supported 00:28:57.587 Zone Descriptor Change Notices: Not Supported 00:28:57.587 Discovery Log Change Notices: Not Supported 00:28:57.587 Controller Attributes 00:28:57.587 128-bit Host Identifier: Not Supported 00:28:57.587 Non-Operational Permissive Mode: Not Supported 00:28:57.587 NVM Sets: Not Supported 00:28:57.587 Read Recovery Levels: Not Supported 00:28:57.587 Endurance Groups: Not Supported 00:28:57.587 Predictable Latency Mode: Not Supported 00:28:57.587 Traffic Based Keep ALive: Not Supported 00:28:57.587 Namespace Granularity: Not Supported 00:28:57.587 SQ Associations: Not Supported 00:28:57.587 UUID List: Not Supported 00:28:57.587 Multi-Domain Subsystem: Not Supported 00:28:57.587 Fixed Capacity Management: Not Supported 00:28:57.587 Variable Capacity Management: Not Supported 00:28:57.587 Delete Endurance Group: Not Supported 00:28:57.587 Delete NVM Set: Not Supported 00:28:57.587 Extended LBA Formats Supported: Supported 00:28:57.587 Flexible Data Placement Supported: Not Supported 00:28:57.587 00:28:57.587 Controller Memory Buffer Support 00:28:57.587 ================================ 00:28:57.587 Supported: No 00:28:57.587 00:28:57.587 Persistent Memory Region Support 00:28:57.587 ================================ 00:28:57.587 Supported: No 00:28:57.587 00:28:57.587 Admin Command Set Attributes 00:28:57.587 ============================ 00:28:57.587 Security Send/Receive: Not Supported 00:28:57.587 Format NVM: Supported 00:28:57.587 Firmware Activate/Download: Not Supported 00:28:57.587 Namespace Management: Supported 00:28:57.587 Device Self-Test: Not Supported 00:28:57.587 Directives: Supported 00:28:57.587 NVMe-MI: Not Supported 00:28:57.587 Virtualization Management: Not Supported 00:28:57.587 Doorbell Buffer Config: Supported 00:28:57.587 Get LBA Status Capability: Not Supported 00:28:57.587 Command & Feature Lockdown Capability: Not Supported 00:28:57.587 Abort Command Limit: 4 00:28:57.587 Async Event Request Limit: 4 00:28:57.587 Number of Firmware Slots: N/A 00:28:57.587 Firmware Slot 1 Read-Only: N/A 00:28:57.587 Firmware Activation Without Reset: N/A 00:28:57.587 Multiple Update Detection Support: N/A 00:28:57.587 Firmware Update Granularity: No Information Provided 00:28:57.587 Per-Namespace SMART Log: Yes 00:28:57.587 Asymmetric Namespace Access Log Page: Not Supported 00:28:57.587 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:57.587 Command Effects Log Page: Supported 00:28:57.587 Get Log Page Extended Data: Supported 00:28:57.587 Telemetry Log Pages: Not Supported 00:28:57.587 Persistent Event Log Pages: Not Supported 00:28:57.587 Supported Log Pages Log Page: May Support 00:28:57.587 Commands Supported & Effects Log Page: Not Supported 00:28:57.587 Feature Identifiers & Effects Log Page:May Support 00:28:57.587 NVMe-MI Commands & Effects Log Page: May Support 00:28:57.587 Data Area 4 for Telemetry Log: Not Supported 00:28:57.587 Error Log Page Entries Supported: 1 00:28:57.587 Keep Alive: Not 
Supported 00:28:57.587 00:28:57.587 NVM Command Set Attributes 00:28:57.587 ========================== 00:28:57.587 Submission Queue Entry Size 00:28:57.587 Max: 64 00:28:57.587 Min: 64 00:28:57.587 Completion Queue Entry Size 00:28:57.587 Max: 16 00:28:57.587 Min: 16 00:28:57.587 Number of Namespaces: 256 00:28:57.587 Compare Command: Supported 00:28:57.587 Write Uncorrectable Command: Not Supported 00:28:57.587 Dataset Management Command: Supported 00:28:57.587 Write Zeroes Command: Supported 00:28:57.587 Set Features Save Field: Supported 00:28:57.587 Reservations: Not Supported 00:28:57.587 Timestamp: Supported 00:28:57.587 Copy: Supported 00:28:57.587 Volatile Write Cache: Present 00:28:57.587 Atomic Write Unit (Normal): 1 00:28:57.587 Atomic Write Unit (PFail): 1 00:28:57.587 Atomic Compare & Write Unit: 1 00:28:57.587 Fused Compare & Write: Not Supported 00:28:57.587 Scatter-Gather List 00:28:57.587 SGL Command Set: Supported 00:28:57.587 SGL Keyed: Not Supported 00:28:57.587 SGL Bit Bucket Descriptor: Not Supported 00:28:57.587 SGL Metadata Pointer: Not Supported 00:28:57.587 Oversized SGL: Not Supported 00:28:57.587 SGL Metadata Address: Not Supported 00:28:57.587 SGL Offset: Not Supported 00:28:57.587 Transport SGL Data Block: Not Supported 00:28:57.587 Replay Protected Memory Block: Not Supported 00:28:57.587 00:28:57.587 Firmware Slot Information 00:28:57.587 ========================= 00:28:57.587 Active slot: 1 00:28:57.587 Slot 1 Firmware Revision: 1.0 00:28:57.587 00:28:57.587 00:28:57.587 Commands Supported and Effects 00:28:57.587 ============================== 00:28:57.587 Admin Commands 00:28:57.587 -------------- 00:28:57.587 Delete I/O Submission Queue (00h): Supported 00:28:57.587 Create I/O Submission Queue (01h): Supported 00:28:57.587 Get Log Page (02h): Supported 00:28:57.587 Delete I/O Completion Queue (04h): Supported 00:28:57.587 Create I/O Completion Queue (05h): Supported 00:28:57.587 Identify (06h): Supported 00:28:57.587 Abort (08h): Supported 00:28:57.587 Set Features (09h): Supported 00:28:57.587 Get Features (0Ah): Supported 00:28:57.587 Asynchronous Event Request (0Ch): Supported 00:28:57.587 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:57.587 Directive Send (19h): Supported 00:28:57.587 Directive Receive (1Ah): Supported 00:28:57.587 Virtualization Management (1Ch): Supported 00:28:57.587 Doorbell Buffer Config (7Ch): Supported 00:28:57.587 Format NVM (80h): Supported LBA-Change 00:28:57.587 I/O Commands 00:28:57.587 ------------ 00:28:57.587 Flush (00h): Supported LBA-Change 00:28:57.587 Write (01h): Supported LBA-Change 00:28:57.587 Read (02h): Supported 00:28:57.587 Compare (05h): Supported 00:28:57.587 Write Zeroes (08h): Supported LBA-Change 00:28:57.587 Dataset Management (09h): Supported LBA-Change 00:28:57.587 Unknown (0Ch): Supported 00:28:57.587 Unknown (12h): Supported 00:28:57.587 Copy (19h): Supported LBA-Change 00:28:57.587 Unknown (1Dh): Supported LBA-Change 00:28:57.587 00:28:57.587 Error Log 00:28:57.587 ========= 00:28:57.587 00:28:57.587 Arbitration 00:28:57.587 =========== 00:28:57.587 Arbitration Burst: no limit 00:28:57.587 00:28:57.587 Power Management 00:28:57.587 ================ 00:28:57.587 Number of Power States: 1 00:28:57.587 Current Power State: Power State #0 00:28:57.587 Power State #0: 00:28:57.587 Max Power: 25.00 W 00:28:57.587 Non-Operational State: Operational 00:28:57.587 Entry Latency: 16 microseconds 00:28:57.587 Exit Latency: 4 microseconds 00:28:57.587 Relative Read Throughput: 0 
00:28:57.587 Relative Read Latency: 0 00:28:57.587 Relative Write Throughput: 0 00:28:57.587 Relative Write Latency: 0 00:28:57.587 Idle Power: Not Reported 00:28:57.587 Active Power: Not Reported 00:28:57.587 Non-Operational Permissive Mode: Not Supported 00:28:57.587 00:28:57.587 Health Information 00:28:57.587 ================== 00:28:57.587 Critical Warnings: 00:28:57.587 Available Spare Space: OK 00:28:57.587 Temperature: OK 00:28:57.587 Device Reliability: OK 00:28:57.587 Read Only: No 00:28:57.587 Volatile Memory Backup: OK 00:28:57.587 Current Temperature: 323 Kelvin (50 Celsius) 00:28:57.588 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:57.588 Available Spare: 0% 00:28:57.588 Available Spare Threshold: 0% 00:28:57.588 Life Percentage Used: 0% 00:28:57.588 Data Units Read: 7834 00:28:57.588 Data Units Written: 3828 00:28:57.588 Host Read Commands: 362561 00:28:57.588 Host Write Commands: 196599 00:28:57.588 Controller Busy Time: 0 minutes 00:28:57.588 Power Cycles: 0 00:28:57.588 Power On Hours: 0 hours 00:28:57.588 Unsafe Shutdowns: 0 00:28:57.588 Unrecoverable Media Errors: 0 00:28:57.588 Lifetime Error Log Entries: 0 00:28:57.588 Warning Temperature Time: 0 minutes 00:28:57.588 Critical Temperature Time: 0 minutes 00:28:57.588 00:28:57.588 Number of Queues 00:28:57.588 ================ 00:28:57.588 Number of I/O Submission Queues: 64 00:28:57.588 Number of I/O Completion Queues: 64 00:28:57.588 00:28:57.588 ZNS Specific Controller Data 00:28:57.588 ============================ 00:28:57.588 Zone Append Size Limit: 0 00:28:57.588 00:28:57.588 00:28:57.588 Active Namespaces 00:28:57.588 ================= 00:28:57.588 Namespace ID:1 00:28:57.588 Error Recovery Timeout: Unlimited 00:28:57.588 Command Set Identifier: NVM (00h) 00:28:57.588 Deallocate: Supported 00:28:57.588 Deallocated/Unwritten Error: Supported 00:28:57.588 Deallocated Read Value: All 0x00 00:28:57.588 Deallocate in Write Zeroes: Not Supported 00:28:57.588 Deallocated Guard Field: 0xFFFF 00:28:57.588 Flush: Supported 00:28:57.588 Reservation: Not Supported 00:28:57.588 Namespace Sharing Capabilities: Private 00:28:57.588 Size (in LBAs): 1310720 (5GiB) 00:28:57.588 Capacity (in LBAs): 1310720 (5GiB) 00:28:57.588 Utilization (in LBAs): 1310720 (5GiB) 00:28:57.588 Thin Provisioning: Not Supported 00:28:57.588 Per-NS Atomic Units: No 00:28:57.588 Maximum Single Source Range Length: 128 00:28:57.588 Maximum Copy Length: 128 00:28:57.588 Maximum Source Range Count: 128 00:28:57.588 NGUID/EUI64 Never Reused: No 00:28:57.588 Namespace Write Protected: No 00:28:57.588 Number of LBA Formats: 8 00:28:57.588 Current LBA Format: LBA Format #04 00:28:57.588 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:57.588 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:57.588 LBA Format #02: Data Size: 512 Metadata Size: 16 00:28:57.588 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:57.588 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:57.588 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:57.588 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:57.588 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:57.588 00:28:57.588 00:28:57.588 real 0m0.621s 00:28:57.588 user 0m0.289s 00:28:57.588 sys 0m0.230s 00:28:57.588 10:54:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.588 10:54:24 -- common/autotest_common.sh@10 -- # set +x 00:28:57.588 ************************************ 00:28:57.588 END TEST nvme_identify 00:28:57.588 ************************************ 00:28:57.588 
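For reference, the identify dump above was produced by the spdk_nvme_identify invocation recorded earlier in this log. A minimal sketch of repeating that step by hand, assuming the same repository checkout under /home/vagrant/spdk_repo and the same QEMU NVMe controller at PCI address 0000:00:06.0 as in this run (the output file name identify.txt is purely illustrative), would be:

#!/usr/bin/env bash
# Hedged sketch: re-run the controller identify step outside run_test.
# Paths, BDF and flags are taken from the log lines above; adjust for other environments.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # build location shown in the log
BDF=0000:00:06.0                                   # QEMU NVMe Ctrl [1b36:0010] in this run

# Same invocation the harness used: PCIe transport, controller at $BDF, instance id 0.
"$SPDK_BIN"/spdk_nvme_identify -r "trtype:PCIe traddr:$BDF" -i 0 > identify.txt

# Convenience only: surface a few fields that are easy to lose in the full dump.
grep -E 'Model Number|Firmware Version|Number of Namespaces|Current LBA Format' identify.txt

Only the flags already visible in the log are used here (-r for the transport/address string, -i 0 matching the id the harness passes to every SPDK tool in this run); the grep at the end is not part of the test, just a quick way to pick out the model, firmware and LBA-format lines from the dump.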
10:54:24 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:28:57.588 10:54:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:57.588 10:54:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:57.588 10:54:24 -- common/autotest_common.sh@10 -- # set +x 00:28:57.588 ************************************ 00:28:57.588 START TEST nvme_perf 00:28:57.588 ************************************ 00:28:57.588 10:54:24 -- common/autotest_common.sh@1104 -- # nvme_perf 00:28:57.588 10:54:24 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:28:58.965 Initializing NVMe Controllers 00:28:58.965 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:58.965 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:58.965 Initialization complete. Launching workers. 00:28:58.965 ======================================================== 00:28:58.965 Latency(us) 00:28:58.965 Device Information : IOPS MiB/s Average min max 00:28:58.965 PCIE (0000:00:06.0) NSID 1 from core 0: 52480.00 615.00 2437.78 1275.07 5458.69 00:28:58.965 ======================================================== 00:28:58.965 Total : 52480.00 615.00 2437.78 1275.07 5458.69 00:28:58.965 00:28:58.965 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:58.965 ================================================================================= 00:28:58.965 1.00000% : 1437.324us 00:28:58.965 10.00000% : 1668.189us 00:28:58.965 25.00000% : 1951.185us 00:28:58.965 50.00000% : 2427.811us 00:28:58.965 75.00000% : 2904.436us 00:28:58.965 90.00000% : 3202.327us 00:28:58.965 95.00000% : 3366.167us 00:28:58.965 98.00000% : 3574.691us 00:28:58.965 99.00000% : 3678.953us 00:28:58.965 99.50000% : 3768.320us 00:28:58.965 99.90000% : 4468.364us 00:28:58.965 99.99000% : 5332.247us 00:28:58.965 99.99900% : 5481.193us 00:28:58.965 99.99990% : 5481.193us 00:28:58.965 99.99999% : 5481.193us 00:28:58.965 00:28:58.965 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:58.965 ============================================================================== 00:28:58.965 Range in us Cumulative IO count 00:28:58.965 1273.484 - 1280.931: 0.0019% ( 1) 00:28:58.965 1288.378 - 1295.825: 0.0057% ( 2) 00:28:58.965 1295.825 - 1303.273: 0.0076% ( 1) 00:28:58.965 1303.273 - 1310.720: 0.0152% ( 4) 00:28:58.965 1310.720 - 1318.167: 0.0229% ( 4) 00:28:58.965 1318.167 - 1325.615: 0.0324% ( 5) 00:28:58.965 1325.615 - 1333.062: 0.0514% ( 10) 00:28:58.965 1333.062 - 1340.509: 0.0629% ( 6) 00:28:58.965 1340.509 - 1347.956: 0.0896% ( 14) 00:28:58.965 1347.956 - 1355.404: 0.1181% ( 15) 00:28:58.965 1355.404 - 1362.851: 0.1601% ( 22) 00:28:58.965 1362.851 - 1370.298: 0.1905% ( 16) 00:28:58.965 1370.298 - 1377.745: 0.2630% ( 38) 00:28:58.965 1377.745 - 1385.193: 0.3239% ( 32) 00:28:58.965 1385.193 - 1392.640: 0.4059% ( 43) 00:28:58.965 1392.640 - 1400.087: 0.4916% ( 45) 00:28:58.965 1400.087 - 1407.535: 0.5793% ( 46) 00:28:58.965 1407.535 - 1414.982: 0.6955% ( 61) 00:28:58.965 1414.982 - 1422.429: 0.8060% ( 58) 00:28:58.965 1422.429 - 1429.876: 0.9413% ( 71) 00:28:58.965 1429.876 - 1437.324: 1.0785% ( 72) 00:28:58.965 1437.324 - 1444.771: 1.2633% ( 97) 00:28:58.965 1444.771 - 1452.218: 1.4291% ( 87) 00:28:58.965 1452.218 - 1459.665: 1.6159% ( 98) 00:28:58.965 1459.665 - 1467.113: 1.8331% ( 114) 00:28:58.965 1467.113 - 1474.560: 2.0389% ( 108) 00:28:58.965 1474.560 - 1482.007: 2.2256% ( 98) 00:28:58.965 1482.007 - 1489.455: 2.4486% ( 117) 00:28:58.965 1489.455 - 
1496.902: 2.7058% ( 135) 00:28:58.965 1496.902 - 1504.349: 2.9840% ( 146) 00:28:58.965 1504.349 - 1511.796: 3.2412% ( 135) 00:28:58.965 1511.796 - 1519.244: 3.5118% ( 142) 00:28:58.965 1519.244 - 1526.691: 3.7919% ( 147) 00:28:58.965 1526.691 - 1534.138: 4.0949% ( 159) 00:28:58.965 1534.138 - 1541.585: 4.3788% ( 149) 00:28:58.965 1541.585 - 1549.033: 4.6951% ( 166) 00:28:58.965 1549.033 - 1556.480: 5.0133% ( 167) 00:28:58.965 1556.480 - 1563.927: 5.3373% ( 170) 00:28:58.965 1563.927 - 1571.375: 5.6860% ( 183) 00:28:58.965 1571.375 - 1578.822: 5.9661% ( 147) 00:28:58.965 1578.822 - 1586.269: 6.3472% ( 200) 00:28:58.965 1586.269 - 1593.716: 6.6921% ( 181) 00:28:58.965 1593.716 - 1601.164: 7.0560% ( 191) 00:28:58.965 1601.164 - 1608.611: 7.4257% ( 194) 00:28:58.965 1608.611 - 1616.058: 7.7706% ( 181) 00:28:58.965 1616.058 - 1623.505: 8.1345% ( 191) 00:28:58.965 1623.505 - 1630.953: 8.5099% ( 197) 00:28:58.965 1630.953 - 1638.400: 8.8643% ( 186) 00:28:58.965 1638.400 - 1645.847: 9.2321% ( 193) 00:28:58.965 1645.847 - 1653.295: 9.5941% ( 190) 00:28:58.965 1653.295 - 1660.742: 9.9848% ( 205) 00:28:58.965 1660.742 - 1668.189: 10.4059% ( 221) 00:28:58.965 1668.189 - 1675.636: 10.7508% ( 181) 00:28:58.965 1675.636 - 1683.084: 11.1319% ( 200) 00:28:58.965 1683.084 - 1690.531: 11.4977% ( 192) 00:28:58.965 1690.531 - 1697.978: 11.8998% ( 211) 00:28:58.965 1697.978 - 1705.425: 12.2580% ( 188) 00:28:58.965 1705.425 - 1712.873: 12.6505% ( 206) 00:28:58.965 1712.873 - 1720.320: 13.0373% ( 203) 00:28:58.965 1720.320 - 1727.767: 13.4165% ( 199) 00:28:58.965 1727.767 - 1735.215: 13.8167% ( 210) 00:28:58.965 1735.215 - 1742.662: 14.1673% ( 184) 00:28:58.965 1742.662 - 1750.109: 14.5465% ( 199) 00:28:58.965 1750.109 - 1757.556: 14.9543% ( 214) 00:28:58.965 1757.556 - 1765.004: 15.3144% ( 189) 00:28:58.965 1765.004 - 1772.451: 15.7031% ( 204) 00:28:58.965 1772.451 - 1779.898: 16.1052% ( 211) 00:28:58.965 1779.898 - 1787.345: 16.4806% ( 197) 00:28:58.965 1787.345 - 1794.793: 16.8540% ( 196) 00:28:58.965 1794.793 - 1802.240: 17.2504% ( 208) 00:28:58.965 1802.240 - 1809.687: 17.6372% ( 203) 00:28:58.965 1809.687 - 1817.135: 18.0183% ( 200) 00:28:58.965 1817.135 - 1824.582: 18.4242% ( 213) 00:28:58.965 1824.582 - 1832.029: 18.7881% ( 191) 00:28:58.965 1832.029 - 1839.476: 19.1997% ( 216) 00:28:58.965 1839.476 - 1846.924: 19.5675% ( 193) 00:28:58.965 1846.924 - 1854.371: 19.9733% ( 213) 00:28:58.965 1854.371 - 1861.818: 20.3735% ( 210) 00:28:58.965 1861.818 - 1869.265: 20.7431% ( 194) 00:28:58.965 1869.265 - 1876.713: 21.1452% ( 211) 00:28:58.965 1876.713 - 1884.160: 21.5244% ( 199) 00:28:58.965 1884.160 - 1891.607: 21.9284% ( 212) 00:28:58.965 1891.607 - 1899.055: 22.2961% ( 193) 00:28:58.965 1899.055 - 1906.502: 22.7058% ( 215) 00:28:58.965 1906.502 - 1921.396: 23.4642% ( 398) 00:28:58.965 1921.396 - 1936.291: 24.2397% ( 407) 00:28:58.965 1936.291 - 1951.185: 25.0438% ( 422) 00:28:58.965 1951.185 - 1966.080: 25.8098% ( 402) 00:28:58.965 1966.080 - 1980.975: 26.5930% ( 411) 00:28:58.965 1980.975 - 1995.869: 27.3990% ( 423) 00:28:58.965 1995.869 - 2010.764: 28.1936% ( 417) 00:28:58.965 2010.764 - 2025.658: 29.0244% ( 436) 00:28:58.965 2025.658 - 2040.553: 29.8380% ( 427) 00:28:58.965 2040.553 - 2055.447: 30.5945% ( 397) 00:28:58.965 2055.447 - 2070.342: 31.3681% ( 406) 00:28:58.965 2070.342 - 2085.236: 32.1570% ( 414) 00:28:58.965 2085.236 - 2100.131: 32.9306% ( 406) 00:28:58.965 2100.131 - 2115.025: 33.6947% ( 401) 00:28:58.965 2115.025 - 2129.920: 34.4474% ( 395) 00:28:58.965 2129.920 - 2144.815: 35.2287% ( 410) 
00:28:58.965 2144.815 - 2159.709: 35.9832% ( 396) 00:28:58.965 2159.709 - 2174.604: 36.7569% ( 406) 00:28:58.965 2174.604 - 2189.498: 37.5324% ( 407) 00:28:58.965 2189.498 - 2204.393: 38.3194% ( 413) 00:28:58.965 2204.393 - 2219.287: 39.0777% ( 398) 00:28:58.965 2219.287 - 2234.182: 39.8780% ( 420) 00:28:58.965 2234.182 - 2249.076: 40.6707% ( 416) 00:28:58.965 2249.076 - 2263.971: 41.4405% ( 404) 00:28:58.965 2263.971 - 2278.865: 42.2275% ( 413) 00:28:58.965 2278.865 - 2293.760: 42.9821% ( 396) 00:28:58.965 2293.760 - 2308.655: 43.8053% ( 432) 00:28:58.965 2308.655 - 2323.549: 44.5808% ( 407) 00:28:58.965 2323.549 - 2338.444: 45.3735% ( 416) 00:28:58.965 2338.444 - 2353.338: 46.1795% ( 423) 00:28:58.965 2353.338 - 2368.233: 46.9607% ( 410) 00:28:58.965 2368.233 - 2383.127: 47.7458% ( 412) 00:28:58.966 2383.127 - 2398.022: 48.5404% ( 417) 00:28:58.966 2398.022 - 2412.916: 49.3540% ( 427) 00:28:58.966 2412.916 - 2427.811: 50.1505% ( 418) 00:28:58.966 2427.811 - 2442.705: 50.9337% ( 411) 00:28:58.966 2442.705 - 2457.600: 51.6940% ( 399) 00:28:58.966 2457.600 - 2472.495: 52.4352% ( 389) 00:28:58.966 2472.495 - 2487.389: 53.1784% ( 390) 00:28:58.966 2487.389 - 2502.284: 53.9291% ( 394) 00:28:58.966 2502.284 - 2517.178: 54.7370% ( 424) 00:28:58.966 2517.178 - 2532.073: 55.5069% ( 404) 00:28:58.966 2532.073 - 2546.967: 56.2881% ( 410) 00:28:58.966 2546.967 - 2561.862: 57.0274% ( 388) 00:28:58.966 2561.862 - 2576.756: 57.8144% ( 413) 00:28:58.966 2576.756 - 2591.651: 58.5842% ( 404) 00:28:58.966 2591.651 - 2606.545: 59.3788% ( 417) 00:28:58.966 2606.545 - 2621.440: 60.1734% ( 417) 00:28:58.966 2621.440 - 2636.335: 60.9470% ( 406) 00:28:58.966 2636.335 - 2651.229: 61.7111% ( 401) 00:28:58.966 2651.229 - 2666.124: 62.5191% ( 424) 00:28:58.966 2666.124 - 2681.018: 63.3403% ( 431) 00:28:58.966 2681.018 - 2695.913: 64.0816% ( 389) 00:28:58.966 2695.913 - 2710.807: 64.8704% ( 414) 00:28:58.966 2710.807 - 2725.702: 65.6612% ( 415) 00:28:58.966 2725.702 - 2740.596: 66.4672% ( 423) 00:28:58.966 2740.596 - 2755.491: 67.2637% ( 418) 00:28:58.966 2755.491 - 2770.385: 68.0202% ( 397) 00:28:58.966 2770.385 - 2785.280: 68.8129% ( 416) 00:28:58.966 2785.280 - 2800.175: 69.5598% ( 392) 00:28:58.966 2800.175 - 2815.069: 70.3868% ( 434) 00:28:58.966 2815.069 - 2829.964: 71.1414% ( 396) 00:28:58.966 2829.964 - 2844.858: 71.9341% ( 416) 00:28:58.966 2844.858 - 2859.753: 72.7382% ( 422) 00:28:58.966 2859.753 - 2874.647: 73.5118% ( 406) 00:28:58.966 2874.647 - 2889.542: 74.2778% ( 402) 00:28:58.966 2889.542 - 2904.436: 75.0572% ( 409) 00:28:58.966 2904.436 - 2919.331: 75.8594% ( 421) 00:28:58.966 2919.331 - 2934.225: 76.6139% ( 396) 00:28:58.966 2934.225 - 2949.120: 77.3876% ( 406) 00:28:58.966 2949.120 - 2964.015: 78.1536% ( 402) 00:28:58.966 2964.015 - 2978.909: 78.9596% ( 423) 00:28:58.966 2978.909 - 2993.804: 79.7771% ( 429) 00:28:58.966 2993.804 - 3008.698: 80.5716% ( 417) 00:28:58.966 3008.698 - 3023.593: 81.3529% ( 410) 00:28:58.966 3023.593 - 3038.487: 82.1437% ( 415) 00:28:58.966 3038.487 - 3053.382: 82.9383% ( 417) 00:28:58.966 3053.382 - 3068.276: 83.7176% ( 409) 00:28:58.966 3068.276 - 3083.171: 84.4836% ( 402) 00:28:58.966 3083.171 - 3098.065: 85.2439% ( 399) 00:28:58.966 3098.065 - 3112.960: 86.0156% ( 405) 00:28:58.966 3112.960 - 3127.855: 86.7683% ( 395) 00:28:58.966 3127.855 - 3142.749: 87.4867% ( 377) 00:28:58.966 3142.749 - 3157.644: 88.2050% ( 377) 00:28:58.966 3157.644 - 3172.538: 88.9062% ( 368) 00:28:58.966 3172.538 - 3187.433: 89.5884% ( 358) 00:28:58.966 3187.433 - 3202.327: 90.2611% ( 353) 
00:28:58.966 3202.327 - 3217.222: 90.8918% ( 331) 00:28:58.966 3217.222 - 3232.116: 91.4691% ( 303) 00:28:58.966 3232.116 - 3247.011: 92.0160% ( 287) 00:28:58.966 3247.011 - 3261.905: 92.5267% ( 268) 00:28:58.966 3261.905 - 3276.800: 93.0202% ( 259) 00:28:58.966 3276.800 - 3291.695: 93.4851% ( 244) 00:28:58.966 3291.695 - 3306.589: 93.9177% ( 227) 00:28:58.966 3306.589 - 3321.484: 94.3197% ( 211) 00:28:58.966 3321.484 - 3336.378: 94.6799% ( 189) 00:28:58.966 3336.378 - 3351.273: 94.9905% ( 163) 00:28:58.966 3351.273 - 3366.167: 95.3125% ( 169) 00:28:58.966 3366.167 - 3381.062: 95.6079% ( 155) 00:28:58.966 3381.062 - 3395.956: 95.8708% ( 138) 00:28:58.966 3395.956 - 3410.851: 96.1090% ( 125) 00:28:58.966 3410.851 - 3425.745: 96.3300% ( 116) 00:28:58.966 3425.745 - 3440.640: 96.5530% ( 117) 00:28:58.966 3440.640 - 3455.535: 96.7416% ( 99) 00:28:58.966 3455.535 - 3470.429: 96.9341% ( 101) 00:28:58.966 3470.429 - 3485.324: 97.1170% ( 96) 00:28:58.966 3485.324 - 3500.218: 97.2885% ( 90) 00:28:58.966 3500.218 - 3515.113: 97.4619% ( 91) 00:28:58.966 3515.113 - 3530.007: 97.6334% ( 90) 00:28:58.966 3530.007 - 3544.902: 97.7954% ( 85) 00:28:58.966 3544.902 - 3559.796: 97.9459% ( 79) 00:28:58.966 3559.796 - 3574.691: 98.0926% ( 77) 00:28:58.966 3574.691 - 3589.585: 98.2450% ( 80) 00:28:58.966 3589.585 - 3604.480: 98.3994% ( 81) 00:28:58.966 3604.480 - 3619.375: 98.5366% ( 72) 00:28:58.966 3619.375 - 3634.269: 98.6757% ( 73) 00:28:58.966 3634.269 - 3649.164: 98.8148% ( 73) 00:28:58.966 3649.164 - 3664.058: 98.9444% ( 68) 00:28:58.966 3664.058 - 3678.953: 99.0663% ( 64) 00:28:58.966 3678.953 - 3693.847: 99.1711% ( 55) 00:28:58.966 3693.847 - 3708.742: 99.2645% ( 49) 00:28:58.966 3708.742 - 3723.636: 99.3445% ( 42) 00:28:58.966 3723.636 - 3738.531: 99.4131% ( 36) 00:28:58.966 3738.531 - 3753.425: 99.4646% ( 27) 00:28:58.966 3753.425 - 3768.320: 99.5122% ( 25) 00:28:58.966 3768.320 - 3783.215: 99.5484% ( 19) 00:28:58.966 3783.215 - 3798.109: 99.5884% ( 21) 00:28:58.966 3798.109 - 3813.004: 99.6208% ( 17) 00:28:58.966 3813.004 - 3842.793: 99.6684% ( 25) 00:28:58.966 3842.793 - 3872.582: 99.7008% ( 17) 00:28:58.966 3872.582 - 3902.371: 99.7275% ( 14) 00:28:58.966 3902.371 - 3932.160: 99.7504% ( 12) 00:28:58.966 3932.160 - 3961.949: 99.7675% ( 9) 00:28:58.966 3961.949 - 3991.738: 99.7790% ( 6) 00:28:58.966 3991.738 - 4021.527: 99.7923% ( 7) 00:28:58.966 4021.527 - 4051.316: 99.8056% ( 7) 00:28:58.966 4051.316 - 4081.105: 99.8152% ( 5) 00:28:58.966 4081.105 - 4110.895: 99.8247% ( 5) 00:28:58.966 4110.895 - 4140.684: 99.8342% ( 5) 00:28:58.966 4140.684 - 4170.473: 99.8418% ( 4) 00:28:58.966 4170.473 - 4200.262: 99.8476% ( 3) 00:28:58.966 4200.262 - 4230.051: 99.8552% ( 4) 00:28:58.966 4230.051 - 4259.840: 99.8609% ( 3) 00:28:58.966 4259.840 - 4289.629: 99.8666% ( 3) 00:28:58.966 4289.629 - 4319.418: 99.8723% ( 3) 00:28:58.966 4319.418 - 4349.207: 99.8800% ( 4) 00:28:58.966 4349.207 - 4378.996: 99.8857% ( 3) 00:28:58.966 4378.996 - 4408.785: 99.8933% ( 4) 00:28:58.966 4408.785 - 4438.575: 99.8990% ( 3) 00:28:58.966 4438.575 - 4468.364: 99.9047% ( 3) 00:28:58.966 4468.364 - 4498.153: 99.9104% ( 3) 00:28:58.966 4498.153 - 4527.942: 99.9181% ( 4) 00:28:58.966 4527.942 - 4557.731: 99.9219% ( 2) 00:28:58.966 4557.731 - 4587.520: 99.9238% ( 1) 00:28:58.966 4587.520 - 4617.309: 99.9257% ( 1) 00:28:58.966 4617.309 - 4647.098: 99.9276% ( 1) 00:28:58.966 4647.098 - 4676.887: 99.9314% ( 2) 00:28:58.966 4676.887 - 4706.676: 99.9333% ( 1) 00:28:58.966 4706.676 - 4736.465: 99.9371% ( 2) 00:28:58.966 4736.465 - 4766.255: 
99.9390% ( 1) 00:28:58.966 4766.255 - 4796.044: 99.9428% ( 2) 00:28:58.966 4796.044 - 4825.833: 99.9447% ( 1) 00:28:58.966 4825.833 - 4855.622: 99.9486% ( 2) 00:28:58.966 4855.622 - 4885.411: 99.9505% ( 1) 00:28:58.966 4885.411 - 4915.200: 99.9543% ( 2) 00:28:58.966 4915.200 - 4944.989: 99.9562% ( 1) 00:28:58.966 4944.989 - 4974.778: 99.9581% ( 1) 00:28:58.966 4974.778 - 5004.567: 99.9619% ( 2) 00:28:58.966 5004.567 - 5034.356: 99.9638% ( 1) 00:28:58.966 5034.356 - 5064.145: 99.9676% ( 2) 00:28:58.966 5064.145 - 5093.935: 99.9695% ( 1) 00:28:58.966 5093.935 - 5123.724: 99.9733% ( 2) 00:28:58.966 5123.724 - 5153.513: 99.9752% ( 1) 00:28:58.966 5153.513 - 5183.302: 99.9771% ( 1) 00:28:58.966 5183.302 - 5213.091: 99.9790% ( 1) 00:28:58.966 5213.091 - 5242.880: 99.9829% ( 2) 00:28:58.966 5242.880 - 5272.669: 99.9867% ( 2) 00:28:58.966 5272.669 - 5302.458: 99.9886% ( 1) 00:28:58.966 5302.458 - 5332.247: 99.9905% ( 1) 00:28:58.966 5332.247 - 5362.036: 99.9943% ( 2) 00:28:58.966 5362.036 - 5391.825: 99.9962% ( 1) 00:28:58.966 5391.825 - 5421.615: 99.9981% ( 1) 00:28:58.966 5451.404 - 5481.193: 100.0000% ( 1) 00:28:58.966 00:28:58.966 10:54:25 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:29:00.346 Initializing NVMe Controllers 00:29:00.346 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:00.346 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:00.346 Initialization complete. Launching workers. 00:29:00.346 ======================================================== 00:29:00.346 Latency(us) 00:29:00.346 Device Information : IOPS MiB/s Average min max 00:29:00.346 PCIE (0000:00:06.0) NSID 1 from core 0: 51971.95 609.05 2464.16 1010.36 10664.28 00:29:00.346 ======================================================== 00:29:00.346 Total : 51971.95 609.05 2464.16 1010.36 10664.28 00:29:00.346 00:29:00.346 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:00.346 ================================================================================= 00:29:00.346 1.00000% : 1563.927us 00:29:00.346 10.00000% : 1966.080us 00:29:00.346 25.00000% : 2144.815us 00:29:00.346 50.00000% : 2383.127us 00:29:00.346 75.00000% : 2695.913us 00:29:00.346 90.00000% : 3112.960us 00:29:00.346 95.00000% : 3381.062us 00:29:00.346 98.00000% : 3649.164us 00:29:00.346 99.00000% : 3872.582us 00:29:00.346 99.50000% : 4230.051us 00:29:00.346 99.90000% : 5362.036us 00:29:00.346 99.99000% : 10604.916us 00:29:00.346 99.99900% : 10664.495us 00:29:00.346 99.99990% : 10664.495us 00:29:00.346 99.99999% : 10664.495us 00:29:00.346 00:29:00.346 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:00.346 ============================================================================== 00:29:00.346 Range in us Cumulative IO count 00:29:00.346 1005.382 - 1012.829: 0.0231% ( 12) 00:29:00.346 1012.829 - 1020.276: 0.0289% ( 3) 00:29:00.346 1042.618 - 1050.065: 0.0327% ( 2) 00:29:00.346 1050.065 - 1057.513: 0.0385% ( 3) 00:29:00.346 1057.513 - 1064.960: 0.0404% ( 1) 00:29:00.346 1064.960 - 1072.407: 0.0462% ( 3) 00:29:00.346 1072.407 - 1079.855: 0.0520% ( 3) 00:29:00.346 1079.855 - 1087.302: 0.0577% ( 3) 00:29:00.346 1087.302 - 1094.749: 0.0654% ( 4) 00:29:00.346 1094.749 - 1102.196: 0.0731% ( 4) 00:29:00.346 1102.196 - 1109.644: 0.0770% ( 2) 00:29:00.346 1109.644 - 1117.091: 0.0847% ( 4) 00:29:00.346 1117.091 - 1124.538: 0.0904% ( 3) 00:29:00.346 1124.538 - 1131.985: 0.0943% ( 2) 00:29:00.346 1131.985 - 1139.433: 0.1001% ( 3) 
00:29:00.346 1139.433 - 1146.880: 0.1058% ( 3) 00:29:00.346 1146.880 - 1154.327: 0.1116% ( 3) 00:29:00.346 1161.775 - 1169.222: 0.1212% ( 5) 00:29:00.346 1169.222 - 1176.669: 0.1251% ( 2) 00:29:00.346 1176.669 - 1184.116: 0.1366% ( 6) 00:29:00.346 1184.116 - 1191.564: 0.1405% ( 2) 00:29:00.346 1191.564 - 1199.011: 0.1462% ( 3) 00:29:00.346 1199.011 - 1206.458: 0.1578% ( 6) 00:29:00.346 1206.458 - 1213.905: 0.2155% ( 30) 00:29:00.346 1213.905 - 1221.353: 0.2559% ( 21) 00:29:00.346 1221.353 - 1228.800: 0.3579% ( 53) 00:29:00.346 1228.800 - 1236.247: 0.3848% ( 14) 00:29:00.346 1236.247 - 1243.695: 0.3983% ( 7) 00:29:00.346 1243.695 - 1251.142: 0.4272% ( 15) 00:29:00.346 1251.142 - 1258.589: 0.4348% ( 4) 00:29:00.346 1258.589 - 1266.036: 0.4368% ( 1) 00:29:00.346 1266.036 - 1273.484: 0.4445% ( 4) 00:29:00.346 1273.484 - 1280.931: 0.4502% ( 3) 00:29:00.346 1280.931 - 1288.378: 0.4695% ( 10) 00:29:00.346 1288.378 - 1295.825: 0.5118% ( 22) 00:29:00.346 1295.825 - 1303.273: 0.5311% ( 10) 00:29:00.346 1303.273 - 1310.720: 0.5695% ( 20) 00:29:00.346 1310.720 - 1318.167: 0.5753% ( 3) 00:29:00.346 1318.167 - 1325.615: 0.5849% ( 5) 00:29:00.346 1325.615 - 1333.062: 0.5946% ( 5) 00:29:00.346 1333.062 - 1340.509: 0.5984% ( 2) 00:29:00.346 1340.509 - 1347.956: 0.6080% ( 5) 00:29:00.346 1347.956 - 1355.404: 0.6157% ( 4) 00:29:00.346 1355.404 - 1362.851: 0.6196% ( 2) 00:29:00.346 1362.851 - 1370.298: 0.6273% ( 4) 00:29:00.346 1370.298 - 1377.745: 0.6350% ( 4) 00:29:00.346 1377.745 - 1385.193: 0.6504% ( 8) 00:29:00.346 1385.193 - 1392.640: 0.6600% ( 5) 00:29:00.346 1392.640 - 1400.087: 0.6734% ( 7) 00:29:00.346 1400.087 - 1407.535: 0.6831% ( 5) 00:29:00.346 1407.535 - 1414.982: 0.7081% ( 13) 00:29:00.346 1414.982 - 1422.429: 0.7793% ( 37) 00:29:00.346 1422.429 - 1429.876: 0.8062% ( 14) 00:29:00.346 1429.876 - 1437.324: 0.8524% ( 24) 00:29:00.346 1437.324 - 1444.771: 0.8659% ( 7) 00:29:00.346 1444.771 - 1452.218: 0.8735% ( 4) 00:29:00.346 1452.218 - 1459.665: 0.8793% ( 3) 00:29:00.346 1459.665 - 1467.113: 0.8832% ( 2) 00:29:00.346 1467.113 - 1474.560: 0.8909% ( 4) 00:29:00.346 1474.560 - 1482.007: 0.8966% ( 3) 00:29:00.346 1482.007 - 1489.455: 0.9063% ( 5) 00:29:00.346 1489.455 - 1496.902: 0.9178% ( 6) 00:29:00.346 1496.902 - 1504.349: 0.9217% ( 2) 00:29:00.346 1504.349 - 1511.796: 0.9370% ( 8) 00:29:00.346 1511.796 - 1519.244: 0.9409% ( 2) 00:29:00.346 1519.244 - 1526.691: 0.9486% ( 4) 00:29:00.346 1526.691 - 1534.138: 0.9582% ( 5) 00:29:00.346 1534.138 - 1541.585: 0.9698% ( 6) 00:29:00.346 1541.585 - 1549.033: 0.9832% ( 7) 00:29:00.346 1549.033 - 1556.480: 0.9928% ( 5) 00:29:00.346 1556.480 - 1563.927: 1.0082% ( 8) 00:29:00.346 1563.927 - 1571.375: 1.0332% ( 13) 00:29:00.346 1571.375 - 1578.822: 1.0486% ( 8) 00:29:00.346 1578.822 - 1586.269: 1.0717% ( 12) 00:29:00.346 1586.269 - 1593.716: 1.0910% ( 10) 00:29:00.346 1593.716 - 1601.164: 1.1160% ( 13) 00:29:00.346 1601.164 - 1608.611: 1.1372% ( 11) 00:29:00.346 1608.611 - 1616.058: 1.1737% ( 19) 00:29:00.346 1616.058 - 1623.505: 1.2661% ( 48) 00:29:00.346 1623.505 - 1630.953: 1.3142% ( 25) 00:29:00.346 1630.953 - 1638.400: 1.3834% ( 36) 00:29:00.346 1638.400 - 1645.847: 1.4354% ( 27) 00:29:00.346 1645.847 - 1653.295: 1.5451% ( 57) 00:29:00.346 1653.295 - 1660.742: 1.5893% ( 23) 00:29:00.346 1660.742 - 1668.189: 1.6316% ( 22) 00:29:00.346 1668.189 - 1675.636: 1.6874% ( 29) 00:29:00.346 1675.636 - 1683.084: 1.7760% ( 46) 00:29:00.346 1683.084 - 1690.531: 1.8375% ( 32) 00:29:00.347 1690.531 - 1697.978: 1.8972% ( 31) 00:29:00.347 1697.978 - 1705.425: 1.9530% ( 
29) 00:29:00.347 1705.425 - 1712.873: 2.0165% ( 33) 00:29:00.347 1712.873 - 1720.320: 2.0857% ( 36) 00:29:00.347 1720.320 - 1727.767: 2.1396% ( 28) 00:29:00.347 1727.767 - 1735.215: 2.2108% ( 37) 00:29:00.347 1735.215 - 1742.662: 2.2743% ( 33) 00:29:00.347 1742.662 - 1750.109: 2.3609% ( 45) 00:29:00.347 1750.109 - 1757.556: 2.4321% ( 37) 00:29:00.347 1757.556 - 1765.004: 2.5187% ( 45) 00:29:00.347 1765.004 - 1772.451: 2.6168% ( 51) 00:29:00.347 1772.451 - 1779.898: 2.7207% ( 54) 00:29:00.347 1779.898 - 1787.345: 2.8304% ( 57) 00:29:00.347 1787.345 - 1794.793: 2.9805% ( 78) 00:29:00.347 1794.793 - 1802.240: 3.1151% ( 70) 00:29:00.347 1802.240 - 1809.687: 3.2460% ( 68) 00:29:00.347 1809.687 - 1817.135: 3.4057% ( 83) 00:29:00.347 1817.135 - 1824.582: 3.6231% ( 113) 00:29:00.347 1824.582 - 1832.029: 3.8194% ( 102) 00:29:00.347 1832.029 - 1839.476: 4.1311% ( 162) 00:29:00.347 1839.476 - 1846.924: 4.3985% ( 139) 00:29:00.347 1846.924 - 1854.371: 4.7198% ( 167) 00:29:00.347 1854.371 - 1861.818: 4.9334% ( 111) 00:29:00.347 1861.818 - 1869.265: 5.2163% ( 147) 00:29:00.347 1869.265 - 1876.713: 5.4741% ( 134) 00:29:00.347 1876.713 - 1884.160: 5.8512% ( 196) 00:29:00.347 1884.160 - 1891.607: 6.3053% ( 236) 00:29:00.347 1891.607 - 1899.055: 6.6324% ( 170) 00:29:00.347 1899.055 - 1906.502: 7.0211% ( 202) 00:29:00.347 1906.502 - 1921.396: 7.7580% ( 383) 00:29:00.347 1921.396 - 1936.291: 8.3660% ( 316) 00:29:00.347 1936.291 - 1951.185: 9.2954% ( 483) 00:29:00.347 1951.185 - 1966.080: 10.3825% ( 565) 00:29:00.347 1966.080 - 1980.975: 11.6255% ( 646) 00:29:00.347 1980.975 - 1995.869: 12.5529% ( 482) 00:29:00.347 1995.869 - 2010.764: 13.6362% ( 563) 00:29:00.347 2010.764 - 2025.658: 14.8195% ( 615) 00:29:00.347 2025.658 - 2040.553: 16.0259% ( 627) 00:29:00.347 2040.553 - 2055.447: 17.1862% ( 603) 00:29:00.347 2055.447 - 2070.342: 18.3426% ( 601) 00:29:00.347 2070.342 - 2085.236: 19.4759% ( 589) 00:29:00.347 2085.236 - 2100.131: 20.8016% ( 689) 00:29:00.347 2100.131 - 2115.025: 22.4429% ( 853) 00:29:00.347 2115.025 - 2129.920: 23.7705% ( 690) 00:29:00.347 2129.920 - 2144.815: 25.4021% ( 848) 00:29:00.347 2144.815 - 2159.709: 27.0857% ( 875) 00:29:00.347 2159.709 - 2174.604: 28.5269% ( 749) 00:29:00.347 2174.604 - 2189.498: 29.8430% ( 684) 00:29:00.347 2189.498 - 2204.393: 31.5112% ( 867) 00:29:00.347 2204.393 - 2219.287: 32.9639% ( 755) 00:29:00.347 2219.287 - 2234.182: 34.4801% ( 788) 00:29:00.347 2234.182 - 2249.076: 35.9751% ( 777) 00:29:00.347 2249.076 - 2263.971: 37.7838% ( 940) 00:29:00.347 2263.971 - 2278.865: 39.2769% ( 776) 00:29:00.347 2278.865 - 2293.760: 40.7546% ( 768) 00:29:00.347 2293.760 - 2308.655: 42.3805% ( 845) 00:29:00.347 2308.655 - 2323.549: 43.7101% ( 691) 00:29:00.347 2323.549 - 2338.444: 45.5842% ( 974) 00:29:00.347 2338.444 - 2353.338: 47.0638% ( 769) 00:29:00.347 2353.338 - 2368.233: 48.5608% ( 778) 00:29:00.347 2368.233 - 2383.127: 50.2828% ( 895) 00:29:00.347 2383.127 - 2398.022: 51.9068% ( 844) 00:29:00.347 2398.022 - 2412.916: 53.5558% ( 857) 00:29:00.347 2412.916 - 2427.811: 55.2317% ( 871) 00:29:00.347 2427.811 - 2442.705: 56.5285% ( 674) 00:29:00.347 2442.705 - 2457.600: 57.6984% ( 608) 00:29:00.347 2457.600 - 2472.495: 58.9394% ( 645) 00:29:00.347 2472.495 - 2487.389: 60.4672% ( 794) 00:29:00.347 2487.389 - 2502.284: 61.8294% ( 708) 00:29:00.347 2502.284 - 2517.178: 63.0416% ( 630) 00:29:00.347 2517.178 - 2532.073: 64.2288% ( 617) 00:29:00.347 2532.073 - 2546.967: 65.4872% ( 654) 00:29:00.347 2546.967 - 2561.862: 66.6667% ( 613) 00:29:00.347 2561.862 - 2576.756: 67.6999% ( 
537) 00:29:00.347 2576.756 - 2591.651: 68.7293% ( 535) 00:29:00.347 2591.651 - 2606.545: 69.7549% ( 533) 00:29:00.347 2606.545 - 2621.440: 70.7323% ( 508) 00:29:00.347 2621.440 - 2636.335: 71.7117% ( 509) 00:29:00.347 2636.335 - 2651.229: 72.6757% ( 501) 00:29:00.347 2651.229 - 2666.124: 73.5550% ( 457) 00:29:00.347 2666.124 - 2681.018: 74.4228% ( 451) 00:29:00.347 2681.018 - 2695.913: 75.2136% ( 411) 00:29:00.347 2695.913 - 2710.807: 76.0467% ( 433) 00:29:00.347 2710.807 - 2725.702: 76.8029% ( 393) 00:29:00.347 2725.702 - 2740.596: 77.5552% ( 391) 00:29:00.347 2740.596 - 2755.491: 78.2556% ( 364) 00:29:00.347 2755.491 - 2770.385: 78.9040% ( 337) 00:29:00.347 2770.385 - 2785.280: 79.5659% ( 344) 00:29:00.347 2785.280 - 2800.175: 80.2471% ( 354) 00:29:00.347 2800.175 - 2815.069: 80.8724% ( 325) 00:29:00.347 2815.069 - 2829.964: 81.4381% ( 294) 00:29:00.347 2829.964 - 2844.858: 82.0307% ( 308) 00:29:00.347 2844.858 - 2859.753: 82.5810% ( 286) 00:29:00.347 2859.753 - 2874.647: 83.1409% ( 291) 00:29:00.347 2874.647 - 2889.542: 83.6893% ( 285) 00:29:00.347 2889.542 - 2904.436: 84.2396% ( 286) 00:29:00.347 2904.436 - 2919.331: 84.7706% ( 276) 00:29:00.347 2919.331 - 2934.225: 85.2671% ( 258) 00:29:00.347 2934.225 - 2949.120: 85.7789% ( 266) 00:29:00.347 2949.120 - 2964.015: 86.2522% ( 246) 00:29:00.347 2964.015 - 2978.909: 86.7044% ( 235) 00:29:00.347 2978.909 - 2993.804: 87.1565% ( 235) 00:29:00.347 2993.804 - 3008.698: 87.5645% ( 212) 00:29:00.347 3008.698 - 3023.593: 87.9743% ( 213) 00:29:00.347 3023.593 - 3038.487: 88.3784% ( 210) 00:29:00.347 3038.487 - 3053.382: 88.7863% ( 212) 00:29:00.347 3053.382 - 3068.276: 89.1519% ( 190) 00:29:00.347 3068.276 - 3083.171: 89.5309% ( 197) 00:29:00.347 3083.171 - 3098.065: 89.9061% ( 195) 00:29:00.347 3098.065 - 3112.960: 90.2524% ( 180) 00:29:00.347 3112.960 - 3127.855: 90.5872% ( 174) 00:29:00.347 3127.855 - 3142.749: 90.9336% ( 180) 00:29:00.347 3142.749 - 3157.644: 91.2530% ( 166) 00:29:00.347 3157.644 - 3172.538: 91.5493% ( 154) 00:29:00.347 3172.538 - 3187.433: 91.8687% ( 166) 00:29:00.347 3187.433 - 3202.327: 92.1554% ( 149) 00:29:00.347 3202.327 - 3217.222: 92.4363% ( 146) 00:29:00.347 3217.222 - 3232.116: 92.7442% ( 160) 00:29:00.347 3232.116 - 3247.011: 93.0135% ( 140) 00:29:00.347 3247.011 - 3261.905: 93.2771% ( 137) 00:29:00.347 3261.905 - 3276.800: 93.5446% ( 139) 00:29:00.347 3276.800 - 3291.695: 93.8082% ( 137) 00:29:00.347 3291.695 - 3306.589: 94.0352% ( 118) 00:29:00.347 3306.589 - 3321.484: 94.2777% ( 126) 00:29:00.347 3321.484 - 3336.378: 94.4893% ( 110) 00:29:00.347 3336.378 - 3351.273: 94.7241% ( 122) 00:29:00.347 3351.273 - 3366.167: 94.9473% ( 116) 00:29:00.347 3366.167 - 3381.062: 95.1474% ( 104) 00:29:00.347 3381.062 - 3395.956: 95.3475% ( 104) 00:29:00.347 3395.956 - 3410.851: 95.5380% ( 99) 00:29:00.347 3410.851 - 3425.745: 95.7150% ( 92) 00:29:00.347 3425.745 - 3440.640: 95.9132% ( 103) 00:29:00.347 3440.640 - 3455.535: 96.1229% ( 109) 00:29:00.347 3455.535 - 3470.429: 96.3057% ( 95) 00:29:00.347 3470.429 - 3485.324: 96.4923% ( 97) 00:29:00.347 3485.324 - 3500.218: 96.6636% ( 89) 00:29:00.347 3500.218 - 3515.113: 96.8233% ( 83) 00:29:00.347 3515.113 - 3530.007: 96.9753% ( 79) 00:29:00.347 3530.007 - 3544.902: 97.1196% ( 75) 00:29:00.347 3544.902 - 3559.796: 97.2735% ( 80) 00:29:00.347 3559.796 - 3574.691: 97.4063% ( 69) 00:29:00.347 3574.691 - 3589.585: 97.5448% ( 72) 00:29:00.347 3589.585 - 3604.480: 97.6776% ( 69) 00:29:00.347 3604.480 - 3619.375: 97.7969% ( 62) 00:29:00.347 3619.375 - 3634.269: 97.9162% ( 62) 00:29:00.347 
3634.269 - 3649.164: 98.0162% ( 52) 00:29:00.347 3649.164 - 3664.058: 98.1144% ( 51) 00:29:00.347 3664.058 - 3678.953: 98.2029% ( 46) 00:29:00.347 3678.953 - 3693.847: 98.2952% ( 48) 00:29:00.347 3693.847 - 3708.742: 98.3799% ( 44) 00:29:00.347 3708.742 - 3723.636: 98.4607% ( 42) 00:29:00.347 3723.636 - 3738.531: 98.5281% ( 35) 00:29:00.347 3738.531 - 3753.425: 98.5954% ( 35) 00:29:00.347 3753.425 - 3768.320: 98.6608% ( 34) 00:29:00.347 3768.320 - 3783.215: 98.7185% ( 30) 00:29:00.347 3783.215 - 3798.109: 98.7820% ( 33) 00:29:00.347 3798.109 - 3813.004: 98.8340% ( 27) 00:29:00.347 3813.004 - 3842.793: 98.9263% ( 48) 00:29:00.347 3842.793 - 3872.582: 99.0072% ( 42) 00:29:00.347 3872.582 - 3902.371: 99.0822% ( 39) 00:29:00.347 3902.371 - 3932.160: 99.1380% ( 29) 00:29:00.347 3932.160 - 3961.949: 99.1823% ( 23) 00:29:00.347 3961.949 - 3991.738: 99.2227% ( 21) 00:29:00.347 3991.738 - 4021.527: 99.2611% ( 20) 00:29:00.347 4021.527 - 4051.316: 99.3015% ( 21) 00:29:00.347 4051.316 - 4081.105: 99.3477% ( 24) 00:29:00.347 4081.105 - 4110.895: 99.3824% ( 18) 00:29:00.347 4110.895 - 4140.684: 99.4228% ( 21) 00:29:00.347 4140.684 - 4170.473: 99.4555% ( 17) 00:29:00.347 4170.473 - 4200.262: 99.4959% ( 21) 00:29:00.347 4200.262 - 4230.051: 99.5228% ( 14) 00:29:00.347 4230.051 - 4259.840: 99.5459% ( 12) 00:29:00.347 4259.840 - 4289.629: 99.5671% ( 11) 00:29:00.347 4289.629 - 4319.418: 99.5902% ( 12) 00:29:00.347 4319.418 - 4349.207: 99.6094% ( 10) 00:29:00.347 4349.207 - 4378.996: 99.6267% ( 9) 00:29:00.347 4378.996 - 4408.785: 99.6402% ( 7) 00:29:00.347 4408.785 - 4438.575: 99.6537% ( 7) 00:29:00.347 4438.575 - 4468.364: 99.6710% ( 9) 00:29:00.347 4468.364 - 4498.153: 99.6902% ( 10) 00:29:00.347 4498.153 - 4527.942: 99.7037% ( 7) 00:29:00.347 4527.942 - 4557.731: 99.7133% ( 5) 00:29:00.348 4557.731 - 4587.520: 99.7249% ( 6) 00:29:00.348 4587.520 - 4617.309: 99.7364% ( 6) 00:29:00.348 4617.309 - 4647.098: 99.7460% ( 5) 00:29:00.348 4647.098 - 4676.887: 99.7518% ( 3) 00:29:00.348 4676.887 - 4706.676: 99.7595% ( 4) 00:29:00.348 4706.676 - 4736.465: 99.7653% ( 3) 00:29:00.348 4736.465 - 4766.255: 99.7749% ( 5) 00:29:00.348 4766.255 - 4796.044: 99.7807% ( 3) 00:29:00.348 4796.044 - 4825.833: 99.7883% ( 4) 00:29:00.348 4825.833 - 4855.622: 99.7941% ( 3) 00:29:00.348 4855.622 - 4885.411: 99.7999% ( 3) 00:29:00.348 4885.411 - 4915.200: 99.8095% ( 5) 00:29:00.348 4915.200 - 4944.989: 99.8191% ( 5) 00:29:00.348 4944.989 - 4974.778: 99.8268% ( 4) 00:29:00.348 4974.778 - 5004.567: 99.8307% ( 2) 00:29:00.348 5004.567 - 5034.356: 99.8365% ( 3) 00:29:00.348 5034.356 - 5064.145: 99.8403% ( 2) 00:29:00.348 5064.145 - 5093.935: 99.8441% ( 2) 00:29:00.348 5093.935 - 5123.724: 99.8480% ( 2) 00:29:00.348 5123.724 - 5153.513: 99.8557% ( 4) 00:29:00.348 5153.513 - 5183.302: 99.8634% ( 4) 00:29:00.348 5183.302 - 5213.091: 99.8692% ( 3) 00:29:00.348 5213.091 - 5242.880: 99.8749% ( 3) 00:29:00.348 5242.880 - 5272.669: 99.8788% ( 2) 00:29:00.348 5272.669 - 5302.458: 99.8903% ( 6) 00:29:00.348 5302.458 - 5332.247: 99.8961% ( 3) 00:29:00.348 5332.247 - 5362.036: 99.9019% ( 3) 00:29:00.348 5362.036 - 5391.825: 99.9057% ( 2) 00:29:00.348 5391.825 - 5421.615: 99.9096% ( 2) 00:29:00.348 5421.615 - 5451.404: 99.9153% ( 3) 00:29:00.348 5451.404 - 5481.193: 99.9192% ( 2) 00:29:00.348 5481.193 - 5510.982: 99.9230% ( 2) 00:29:00.348 5510.982 - 5540.771: 99.9269% ( 2) 00:29:00.348 5540.771 - 5570.560: 99.9307% ( 2) 00:29:00.348 5570.560 - 5600.349: 99.9327% ( 1) 00:29:00.348 5838.662 - 5868.451: 99.9346% ( 1) 00:29:00.348 6732.335 - 
6762.124: 99.9365% ( 1) 00:29:00.348 6881.280 - 6911.069: 99.9384% ( 1) 00:29:00.348 7149.382 - 7179.171: 99.9423% ( 2) 00:29:00.348 7983.476 - 8043.055: 99.9442% ( 1) 00:29:00.348 8043.055 - 8102.633: 99.9461% ( 1) 00:29:00.348 10247.447 - 10307.025: 99.9500% ( 2) 00:29:00.348 10307.025 - 10366.604: 99.9634% ( 7) 00:29:00.348 10366.604 - 10426.182: 99.9711% ( 4) 00:29:00.348 10426.182 - 10485.760: 99.9808% ( 5) 00:29:00.348 10485.760 - 10545.338: 99.9846% ( 2) 00:29:00.348 10545.338 - 10604.916: 99.9923% ( 4) 00:29:00.348 10604.916 - 10664.495: 100.0000% ( 4) 00:29:00.348 00:29:00.348 10:54:26 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:29:00.348 00:29:00.348 real 0m2.573s 00:29:00.348 user 0m2.199s 00:29:00.348 sys 0m0.227s 00:29:00.348 10:54:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.348 ************************************ 00:29:00.348 END TEST nvme_perf 00:29:00.348 ************************************ 00:29:00.348 10:54:26 -- common/autotest_common.sh@10 -- # set +x 00:29:00.348 10:54:26 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:00.348 10:54:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:00.348 10:54:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.348 10:54:26 -- common/autotest_common.sh@10 -- # set +x 00:29:00.348 ************************************ 00:29:00.348 START TEST nvme_hello_world 00:29:00.348 ************************************ 00:29:00.348 10:54:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:00.607 Initializing NVMe Controllers 00:29:00.607 Attached to 0000:00:06.0 00:29:00.607 Namespace ID: 1 size: 5GB 00:29:00.607 Initialization complete. 00:29:00.607 INFO: using host memory buffer for IO 00:29:00.607 Hello world! 
00:29:00.607 00:29:00.607 real 0m0.280s 00:29:00.607 user 0m0.112s 00:29:00.607 sys 0m0.093s 00:29:00.607 10:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.607 10:54:27 -- common/autotest_common.sh@10 -- # set +x 00:29:00.607 ************************************ 00:29:00.607 END TEST nvme_hello_world 00:29:00.607 ************************************ 00:29:00.607 10:54:27 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:00.607 10:54:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.607 10:54:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.607 10:54:27 -- common/autotest_common.sh@10 -- # set +x 00:29:00.607 ************************************ 00:29:00.607 START TEST nvme_sgl 00:29:00.607 ************************************ 00:29:00.607 10:54:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:00.865 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:29:00.865 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:29:00.865 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:29:00.865 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:29:00.865 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:29:00.865 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:29:00.865 NVMe Readv/Writev Request test 00:29:00.865 Attached to 0000:00:06.0 00:29:00.865 0000:00:06.0: build_io_request_2 test passed 00:29:00.865 0000:00:06.0: build_io_request_4 test passed 00:29:00.865 0000:00:06.0: build_io_request_5 test passed 00:29:00.865 0000:00:06.0: build_io_request_6 test passed 00:29:00.865 0000:00:06.0: build_io_request_7 test passed 00:29:00.865 0000:00:06.0: build_io_request_10 test passed 00:29:00.865 Cleaning up... 00:29:00.865 00:29:00.865 real 0m0.319s 00:29:00.865 user 0m0.116s 00:29:00.865 sys 0m0.124s 00:29:00.865 10:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.865 10:54:27 -- common/autotest_common.sh@10 -- # set +x 00:29:00.865 ************************************ 00:29:00.865 END TEST nvme_sgl 00:29:00.865 ************************************ 00:29:00.865 10:54:27 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:00.865 10:54:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.865 10:54:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.865 10:54:27 -- common/autotest_common.sh@10 -- # set +x 00:29:00.865 ************************************ 00:29:00.865 START TEST nvme_e2edp 00:29:00.865 ************************************ 00:29:00.865 10:54:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:01.123 NVMe Write/Read with End-to-End data protection test 00:29:01.123 Attached to 0000:00:06.0 00:29:01.123 Cleaning up... 
00:29:01.381 00:29:01.381 real 0m0.262s 00:29:01.381 user 0m0.064s 00:29:01.381 sys 0m0.124s 00:29:01.381 10:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.381 10:54:27 -- common/autotest_common.sh@10 -- # set +x 00:29:01.381 ************************************ 00:29:01.381 END TEST nvme_e2edp 00:29:01.381 ************************************ 00:29:01.381 10:54:27 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:01.381 10:54:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.381 10:54:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.381 10:54:27 -- common/autotest_common.sh@10 -- # set +x 00:29:01.381 ************************************ 00:29:01.381 START TEST nvme_reserve 00:29:01.381 ************************************ 00:29:01.381 10:54:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:01.639 ===================================================== 00:29:01.639 NVMe Controller at PCI bus 0, device 6, function 0 00:29:01.639 ===================================================== 00:29:01.639 Reservations: Not Supported 00:29:01.639 Reservation test passed 00:29:01.639 00:29:01.639 real 0m0.265s 00:29:01.639 user 0m0.089s 00:29:01.639 sys 0m0.110s 00:29:01.639 10:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.639 10:54:28 -- common/autotest_common.sh@10 -- # set +x 00:29:01.639 ************************************ 00:29:01.639 END TEST nvme_reserve 00:29:01.639 ************************************ 00:29:01.639 10:54:28 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:01.639 10:54:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.639 10:54:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.639 10:54:28 -- common/autotest_common.sh@10 -- # set +x 00:29:01.639 ************************************ 00:29:01.639 START TEST nvme_err_injection 00:29:01.639 ************************************ 00:29:01.639 10:54:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:01.897 NVMe Error Injection test 00:29:01.897 Attached to 0000:00:06.0 00:29:01.897 0000:00:06.0: get features failed as expected 00:29:01.897 0000:00:06.0: get features successfully as expected 00:29:01.897 0000:00:06.0: read failed as expected 00:29:01.897 0000:00:06.0: read successfully as expected 00:29:01.897 Cleaning up... 
00:29:01.898 00:29:01.898 real 0m0.269s 00:29:01.898 user 0m0.096s 00:29:01.898 sys 0m0.117s 00:29:01.898 10:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.898 10:54:28 -- common/autotest_common.sh@10 -- # set +x 00:29:01.898 ************************************ 00:29:01.898 END TEST nvme_err_injection 00:29:01.898 ************************************ 00:29:01.898 10:54:28 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:01.898 10:54:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:29:01.898 10:54:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.898 10:54:28 -- common/autotest_common.sh@10 -- # set +x 00:29:01.898 ************************************ 00:29:01.898 START TEST nvme_overhead 00:29:01.898 ************************************ 00:29:01.898 10:54:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:03.273 Initializing NVMe Controllers 00:29:03.273 Attached to 0000:00:06.0 00:29:03.273 Initialization complete. Launching workers. 00:29:03.273 submit (in ns) avg, min, max = 16193.4, 14353.2, 212600.0 00:29:03.273 complete (in ns) avg, min, max = 11488.4, 9868.6, 191175.9 00:29:03.273 00:29:03.273 Submit histogram 00:29:03.273 ================ 00:29:03.273 Range in us Cumulative Count 00:29:03.273 14.313 - 14.371: 0.0115% ( 1) 00:29:03.273 14.371 - 14.429: 0.0573% ( 4) 00:29:03.273 14.429 - 14.487: 0.4580% ( 35) 00:29:03.273 14.487 - 14.545: 2.9543% ( 218) 00:29:03.273 14.545 - 14.604: 10.1225% ( 626) 00:29:03.273 14.604 - 14.662: 20.4168% ( 899) 00:29:03.273 14.662 - 14.720: 32.2455% ( 1033) 00:29:03.273 14.720 - 14.778: 42.7574% ( 918) 00:29:03.273 14.778 - 14.836: 50.1546% ( 646) 00:29:03.273 14.836 - 14.895: 55.3418% ( 453) 00:29:03.273 14.895 - 15.011: 62.4986% ( 625) 00:29:03.273 15.011 - 15.127: 68.1095% ( 490) 00:29:03.274 15.127 - 15.244: 71.4760% ( 294) 00:29:03.274 15.244 - 15.360: 73.2394% ( 154) 00:29:03.274 15.360 - 15.476: 74.3502% ( 97) 00:29:03.274 15.476 - 15.593: 75.0487% ( 61) 00:29:03.274 15.593 - 15.709: 75.7472% ( 61) 00:29:03.274 15.709 - 15.825: 76.1823% ( 38) 00:29:03.274 15.825 - 15.942: 76.5258% ( 30) 00:29:03.274 15.942 - 16.058: 76.8579% ( 29) 00:29:03.274 16.058 - 16.175: 77.0984% ( 21) 00:29:03.274 16.175 - 16.291: 77.2701% ( 15) 00:29:03.274 16.291 - 16.407: 77.4190% ( 13) 00:29:03.274 16.407 - 16.524: 77.5564% ( 12) 00:29:03.274 16.524 - 16.640: 77.6366% ( 7) 00:29:03.274 16.640 - 16.756: 77.7396% ( 9) 00:29:03.274 16.756 - 16.873: 77.8656% ( 11) 00:29:03.274 16.873 - 16.989: 78.0144% ( 13) 00:29:03.274 16.989 - 17.105: 78.1289% ( 10) 00:29:03.274 17.105 - 17.222: 79.3885% ( 110) 00:29:03.274 17.222 - 17.338: 83.4421% ( 354) 00:29:03.274 17.338 - 17.455: 85.9727% ( 221) 00:29:03.274 17.455 - 17.571: 86.9804% ( 88) 00:29:03.274 17.571 - 17.687: 87.4385% ( 40) 00:29:03.274 17.687 - 17.804: 87.8278% ( 34) 00:29:03.274 17.804 - 17.920: 88.0911% ( 23) 00:29:03.274 17.920 - 18.036: 88.2515% ( 14) 00:29:03.274 18.036 - 18.153: 88.4919% ( 21) 00:29:03.274 18.153 - 18.269: 88.6637% ( 15) 00:29:03.274 18.269 - 18.385: 88.9271% ( 23) 00:29:03.274 18.385 - 18.502: 89.0759% ( 13) 00:29:03.274 18.502 - 18.618: 89.1904% ( 10) 00:29:03.274 18.618 - 18.735: 89.2820% ( 8) 00:29:03.274 18.735 - 18.851: 89.3851% ( 9) 00:29:03.274 18.851 - 18.967: 89.4309% ( 4) 00:29:03.274 18.967 - 19.084: 89.4767% ( 4) 00:29:03.274 19.084 - 19.200: 89.5111% ( 3) 00:29:03.274 19.200 - 
19.316: 89.5569% ( 4) 00:29:03.274 19.316 - 19.433: 89.5798% ( 2) 00:29:03.274 19.549 - 19.665: 89.6141% ( 3) 00:29:03.274 19.665 - 19.782: 89.6714% ( 5) 00:29:03.274 19.782 - 19.898: 89.7401% ( 6) 00:29:03.274 19.898 - 20.015: 89.7630% ( 2) 00:29:03.274 20.015 - 20.131: 89.8317% ( 6) 00:29:03.274 20.131 - 20.247: 89.8775% ( 4) 00:29:03.274 20.247 - 20.364: 89.9805% ( 9) 00:29:03.274 20.364 - 20.480: 89.9920% ( 1) 00:29:03.274 20.480 - 20.596: 90.0034% ( 1) 00:29:03.274 20.596 - 20.713: 90.1294% ( 11) 00:29:03.274 20.713 - 20.829: 90.1981% ( 6) 00:29:03.274 20.829 - 20.945: 90.2210% ( 2) 00:29:03.274 20.945 - 21.062: 90.3012% ( 7) 00:29:03.274 21.062 - 21.178: 90.4042% ( 9) 00:29:03.274 21.178 - 21.295: 90.5531% ( 13) 00:29:03.274 21.295 - 21.411: 90.6676% ( 10) 00:29:03.274 21.411 - 21.527: 90.7706% ( 9) 00:29:03.274 21.527 - 21.644: 90.9195% ( 13) 00:29:03.274 21.644 - 21.760: 91.1371% ( 19) 00:29:03.274 21.760 - 21.876: 91.2401% ( 9) 00:29:03.274 21.876 - 21.993: 91.4233% ( 16) 00:29:03.274 21.993 - 22.109: 91.6638% ( 21) 00:29:03.274 22.109 - 22.225: 91.9272% ( 23) 00:29:03.274 22.225 - 22.342: 92.3394% ( 36) 00:29:03.274 22.342 - 22.458: 92.6944% ( 31) 00:29:03.274 22.458 - 22.575: 92.9119% ( 19) 00:29:03.274 22.575 - 22.691: 93.2898% ( 33) 00:29:03.274 22.691 - 22.807: 93.4730% ( 16) 00:29:03.274 22.807 - 22.924: 93.8051% ( 29) 00:29:03.274 22.924 - 23.040: 94.0570% ( 22) 00:29:03.274 23.040 - 23.156: 94.3662% ( 27) 00:29:03.274 23.156 - 23.273: 94.6983% ( 29) 00:29:03.274 23.273 - 23.389: 95.0303% ( 29) 00:29:03.274 23.389 - 23.505: 95.3052% ( 24) 00:29:03.274 23.505 - 23.622: 95.4998% ( 17) 00:29:03.274 23.622 - 23.738: 95.6830% ( 16) 00:29:03.274 23.738 - 23.855: 95.8548% ( 15) 00:29:03.274 23.855 - 23.971: 96.0609% ( 18) 00:29:03.274 23.971 - 24.087: 96.2899% ( 20) 00:29:03.274 24.087 - 24.204: 96.4388% ( 13) 00:29:03.274 24.204 - 24.320: 96.6335% ( 17) 00:29:03.274 24.320 - 24.436: 96.8052% ( 15) 00:29:03.274 24.436 - 24.553: 96.9541% ( 13) 00:29:03.274 24.553 - 24.669: 97.0571% ( 9) 00:29:03.274 24.669 - 24.785: 97.2175% ( 14) 00:29:03.274 24.785 - 24.902: 97.3434% ( 11) 00:29:03.274 24.902 - 25.018: 97.4007% ( 5) 00:29:03.274 25.018 - 25.135: 97.4694% ( 6) 00:29:03.274 25.135 - 25.251: 97.5724% ( 9) 00:29:03.274 25.251 - 25.367: 97.6869% ( 10) 00:29:03.274 25.367 - 25.484: 97.7785% ( 8) 00:29:03.274 25.484 - 25.600: 97.8358% ( 5) 00:29:03.274 25.600 - 25.716: 97.9389% ( 9) 00:29:03.274 25.716 - 25.833: 97.9732% ( 3) 00:29:03.274 25.833 - 25.949: 98.0419% ( 6) 00:29:03.274 25.949 - 26.065: 98.0992% ( 5) 00:29:03.274 26.065 - 26.182: 98.1679% ( 6) 00:29:03.274 26.182 - 26.298: 98.1908% ( 2) 00:29:03.274 26.298 - 26.415: 98.2824% ( 8) 00:29:03.274 26.415 - 26.531: 98.3282% ( 4) 00:29:03.274 26.531 - 26.647: 98.3969% ( 6) 00:29:03.274 26.647 - 26.764: 98.4312% ( 3) 00:29:03.274 26.764 - 26.880: 98.4656% ( 3) 00:29:03.274 26.880 - 26.996: 98.4999% ( 3) 00:29:03.274 26.996 - 27.113: 98.5572% ( 5) 00:29:03.274 27.113 - 27.229: 98.5801% ( 2) 00:29:03.274 27.229 - 27.345: 98.6030% ( 2) 00:29:03.274 27.345 - 27.462: 98.6145% ( 1) 00:29:03.274 27.462 - 27.578: 98.6946% ( 7) 00:29:03.274 27.578 - 27.695: 98.7404% ( 4) 00:29:03.274 27.695 - 27.811: 98.7519% ( 1) 00:29:03.274 27.811 - 27.927: 98.7633% ( 1) 00:29:03.274 28.044 - 28.160: 98.7977% ( 3) 00:29:03.274 28.276 - 28.393: 98.8206% ( 2) 00:29:03.274 28.509 - 28.625: 98.8320% ( 1) 00:29:03.274 28.625 - 28.742: 98.8435% ( 1) 00:29:03.274 28.742 - 28.858: 98.8549% ( 1) 00:29:03.274 28.975 - 29.091: 98.9007% ( 4) 00:29:03.274 29.091 - 
29.207: 98.9236% ( 2) 00:29:03.274 29.207 - 29.324: 98.9351% ( 1) 00:29:03.274 29.324 - 29.440: 99.0496% ( 10) 00:29:03.274 29.440 - 29.556: 99.1755% ( 11) 00:29:03.274 29.556 - 29.673: 99.2557% ( 7) 00:29:03.274 29.673 - 29.789: 99.3015% ( 4) 00:29:03.274 29.789 - 30.022: 99.3817% ( 7) 00:29:03.274 30.022 - 30.255: 99.4046% ( 2) 00:29:03.274 30.255 - 30.487: 99.4160% ( 1) 00:29:03.274 30.487 - 30.720: 99.4389% ( 2) 00:29:03.274 30.720 - 30.953: 99.4504% ( 1) 00:29:03.274 30.953 - 31.185: 99.4847% ( 3) 00:29:03.274 31.185 - 31.418: 99.5076% ( 2) 00:29:03.274 31.418 - 31.651: 99.5191% ( 1) 00:29:03.274 31.651 - 31.884: 99.5305% ( 1) 00:29:03.274 31.884 - 32.116: 99.5420% ( 1) 00:29:03.274 32.116 - 32.349: 99.5534% ( 1) 00:29:03.274 32.582 - 32.815: 99.5649% ( 1) 00:29:03.274 32.815 - 33.047: 99.5763% ( 1) 00:29:03.274 33.047 - 33.280: 99.5878% ( 1) 00:29:03.274 33.513 - 33.745: 99.6221% ( 3) 00:29:03.274 33.745 - 33.978: 99.6336% ( 1) 00:29:03.274 33.978 - 34.211: 99.6450% ( 1) 00:29:03.274 34.211 - 34.444: 99.6679% ( 2) 00:29:03.274 34.676 - 34.909: 99.6794% ( 1) 00:29:03.274 34.909 - 35.142: 99.6908% ( 1) 00:29:03.274 35.142 - 35.375: 99.7137% ( 2) 00:29:03.274 35.607 - 35.840: 99.7252% ( 1) 00:29:03.274 36.073 - 36.305: 99.7366% ( 1) 00:29:03.274 36.538 - 36.771: 99.7481% ( 1) 00:29:03.274 37.236 - 37.469: 99.7595% ( 1) 00:29:03.274 37.469 - 37.702: 99.7710% ( 1) 00:29:03.274 37.702 - 37.935: 99.7824% ( 1) 00:29:03.274 38.167 - 38.400: 99.7939% ( 1) 00:29:03.274 38.633 - 38.865: 99.8053% ( 1) 00:29:03.274 39.564 - 39.796: 99.8168% ( 1) 00:29:03.274 40.262 - 40.495: 99.8282% ( 1) 00:29:03.274 40.727 - 40.960: 99.8511% ( 2) 00:29:03.274 41.658 - 41.891: 99.8740% ( 2) 00:29:03.274 42.589 - 42.822: 99.8855% ( 1) 00:29:03.274 44.684 - 44.916: 99.8969% ( 1) 00:29:03.274 47.709 - 47.942: 99.9084% ( 1) 00:29:03.274 49.804 - 50.036: 99.9198% ( 1) 00:29:03.274 61.440 - 61.905: 99.9313% ( 1) 00:29:03.274 64.233 - 64.698: 99.9427% ( 1) 00:29:03.274 70.749 - 71.215: 99.9542% ( 1) 00:29:03.274 71.215 - 71.680: 99.9656% ( 1) 00:29:03.274 74.007 - 74.473: 99.9771% ( 1) 00:29:03.274 105.193 - 105.658: 99.9885% ( 1) 00:29:03.274 212.247 - 213.178: 100.0000% ( 1) 00:29:03.274 00:29:03.274 Complete histogram 00:29:03.274 ================== 00:29:03.274 Range in us Cumulative Count 00:29:03.274 9.833 - 9.891: 0.0115% ( 1) 00:29:03.274 9.949 - 10.007: 0.0344% ( 2) 00:29:03.274 10.007 - 10.065: 0.9848% ( 83) 00:29:03.274 10.065 - 10.124: 8.4965% ( 656) 00:29:03.274 10.124 - 10.182: 23.0505% ( 1271) 00:29:03.274 10.182 - 10.240: 36.2189% ( 1150) 00:29:03.274 10.240 - 10.298: 48.1278% ( 1040) 00:29:03.274 10.298 - 10.356: 56.1777% ( 703) 00:29:03.274 10.356 - 10.415: 61.5711% ( 471) 00:29:03.274 10.415 - 10.473: 65.1781% ( 315) 00:29:03.274 10.473 - 10.531: 67.5942% ( 211) 00:29:03.274 10.531 - 10.589: 69.6553% ( 180) 00:29:03.274 10.589 - 10.647: 71.0867% ( 125) 00:29:03.274 10.647 - 10.705: 72.3806% ( 113) 00:29:03.274 10.705 - 10.764: 73.1936% ( 71) 00:29:03.275 10.764 - 10.822: 73.9494% ( 66) 00:29:03.275 10.822 - 10.880: 74.3845% ( 38) 00:29:03.275 10.880 - 10.938: 74.8082% ( 37) 00:29:03.275 10.938 - 10.996: 75.1174% ( 27) 00:29:03.275 10.996 - 11.055: 75.3464% ( 20) 00:29:03.275 11.055 - 11.113: 75.4609% ( 10) 00:29:03.275 11.113 - 11.171: 75.5640% ( 9) 00:29:03.275 11.171 - 11.229: 75.7815% ( 19) 00:29:03.275 11.229 - 11.287: 76.0563% ( 24) 00:29:03.275 11.287 - 11.345: 76.9495% ( 78) 00:29:03.275 11.345 - 11.404: 78.8732% ( 168) 00:29:03.275 11.404 - 11.462: 81.8046% ( 256) 00:29:03.275 11.462 - 11.520: 
84.1292% ( 203) 00:29:03.275 11.520 - 11.578: 85.6636% ( 134) 00:29:03.275 11.578 - 11.636: 86.4079% ( 65) 00:29:03.275 11.636 - 11.695: 86.6827% ( 24) 00:29:03.275 11.695 - 11.753: 86.8774% ( 17) 00:29:03.275 11.753 - 11.811: 87.1178% ( 21) 00:29:03.275 11.811 - 11.869: 87.2438% ( 11) 00:29:03.275 11.869 - 11.927: 87.3697% ( 11) 00:29:03.275 11.927 - 11.985: 87.5530% ( 16) 00:29:03.275 11.985 - 12.044: 87.6675% ( 10) 00:29:03.275 12.044 - 12.102: 87.7018% ( 3) 00:29:03.275 12.102 - 12.160: 87.8049% ( 9) 00:29:03.275 12.160 - 12.218: 87.8850% ( 7) 00:29:03.275 12.218 - 12.276: 87.9537% ( 6) 00:29:03.275 12.276 - 12.335: 87.9995% ( 4) 00:29:03.275 12.335 - 12.393: 88.0339% ( 3) 00:29:03.275 12.393 - 12.451: 88.0911% ( 5) 00:29:03.275 12.451 - 12.509: 88.1141% ( 2) 00:29:03.275 12.509 - 12.567: 88.1599% ( 4) 00:29:03.275 12.567 - 12.625: 88.2171% ( 5) 00:29:03.275 12.625 - 12.684: 88.2744% ( 5) 00:29:03.275 12.684 - 12.742: 88.3202% ( 4) 00:29:03.275 12.742 - 12.800: 88.3545% ( 3) 00:29:03.275 12.800 - 12.858: 88.3889% ( 3) 00:29:03.275 12.858 - 12.916: 88.4461% ( 5) 00:29:03.275 12.975 - 13.033: 88.4805% ( 3) 00:29:03.275 13.091 - 13.149: 88.5263% ( 4) 00:29:03.275 13.149 - 13.207: 88.5377% ( 1) 00:29:03.275 13.265 - 13.324: 88.5492% ( 1) 00:29:03.275 13.324 - 13.382: 88.5835% ( 3) 00:29:03.275 13.382 - 13.440: 88.6064% ( 2) 00:29:03.275 13.440 - 13.498: 88.6408% ( 3) 00:29:03.275 13.498 - 13.556: 88.6751% ( 3) 00:29:03.275 13.556 - 13.615: 88.7209% ( 4) 00:29:03.275 13.615 - 13.673: 88.7782% ( 5) 00:29:03.275 13.673 - 13.731: 88.8126% ( 3) 00:29:03.275 13.731 - 13.789: 88.8469% ( 3) 00:29:03.275 13.847 - 13.905: 88.8927% ( 4) 00:29:03.275 13.905 - 13.964: 88.9156% ( 2) 00:29:03.275 13.964 - 14.022: 88.9958% ( 7) 00:29:03.275 14.022 - 14.080: 89.0301% ( 3) 00:29:03.275 14.080 - 14.138: 89.0759% ( 4) 00:29:03.275 14.138 - 14.196: 89.0988% ( 2) 00:29:03.275 14.196 - 14.255: 89.1217% ( 2) 00:29:03.275 14.255 - 14.313: 89.2248% ( 9) 00:29:03.275 14.313 - 14.371: 89.2820% ( 5) 00:29:03.275 14.371 - 14.429: 89.3507% ( 6) 00:29:03.275 14.429 - 14.487: 89.4080% ( 5) 00:29:03.275 14.487 - 14.545: 89.4423% ( 3) 00:29:03.275 14.545 - 14.604: 89.4996% ( 5) 00:29:03.275 14.604 - 14.662: 89.5111% ( 1) 00:29:03.275 14.662 - 14.720: 89.5683% ( 5) 00:29:03.275 14.720 - 14.778: 89.5912% ( 2) 00:29:03.275 14.778 - 14.836: 89.6141% ( 2) 00:29:03.275 14.836 - 14.895: 89.6714% ( 5) 00:29:03.275 14.895 - 15.011: 89.7286% ( 5) 00:29:03.275 15.011 - 15.127: 89.7744% ( 4) 00:29:03.275 15.127 - 15.244: 89.8546% ( 7) 00:29:03.275 15.244 - 15.360: 89.9233% ( 6) 00:29:03.275 15.360 - 15.476: 89.9691% ( 4) 00:29:03.275 15.476 - 15.593: 90.0034% ( 3) 00:29:03.275 15.593 - 15.709: 90.0950% ( 8) 00:29:03.275 15.709 - 15.825: 90.1752% ( 7) 00:29:03.275 15.825 - 15.942: 90.2439% ( 6) 00:29:03.275 15.942 - 16.058: 90.2783% ( 3) 00:29:03.275 16.058 - 16.175: 90.4042% ( 11) 00:29:03.275 16.175 - 16.291: 90.4958% ( 8) 00:29:03.275 16.291 - 16.407: 90.6103% ( 10) 00:29:03.275 16.407 - 16.524: 90.6905% ( 7) 00:29:03.275 16.524 - 16.640: 90.8164% ( 11) 00:29:03.275 16.640 - 16.756: 90.9310% ( 10) 00:29:03.275 16.756 - 16.873: 91.0798% ( 13) 00:29:03.275 16.873 - 16.989: 91.2172% ( 12) 00:29:03.275 16.989 - 17.105: 91.4233% ( 18) 00:29:03.275 17.105 - 17.222: 91.7211% ( 26) 00:29:03.275 17.222 - 17.338: 91.9043% ( 16) 00:29:03.275 17.338 - 17.455: 92.1104% ( 18) 00:29:03.275 17.455 - 17.571: 92.3623% ( 22) 00:29:03.275 17.571 - 17.687: 92.6600% ( 26) 00:29:03.275 17.687 - 17.804: 92.9234% ( 23) 00:29:03.275 17.804 - 17.920: 93.3013% 
( 33) 00:29:03.275 17.920 - 18.036: 93.6906% ( 34) 00:29:03.275 18.036 - 18.153: 93.9883% ( 26) 00:29:03.275 18.153 - 18.269: 94.4120% ( 37) 00:29:03.275 18.269 - 18.385: 94.7670% ( 31) 00:29:03.275 18.385 - 18.502: 95.0647% ( 26) 00:29:03.275 18.502 - 18.618: 95.3395% ( 24) 00:29:03.275 18.618 - 18.735: 95.5914% ( 22) 00:29:03.275 18.735 - 18.851: 95.9350% ( 30) 00:29:03.275 18.851 - 18.967: 96.0838% ( 13) 00:29:03.275 18.967 - 19.084: 96.2670% ( 16) 00:29:03.275 19.084 - 19.200: 96.4159% ( 13) 00:29:03.275 19.200 - 19.316: 96.6106% ( 17) 00:29:03.275 19.316 - 19.433: 96.7480% ( 12) 00:29:03.275 19.433 - 19.549: 96.9655% ( 19) 00:29:03.275 19.549 - 19.665: 97.1373% ( 15) 00:29:03.275 19.665 - 19.782: 97.2633% ( 11) 00:29:03.275 19.782 - 19.898: 97.3434% ( 7) 00:29:03.275 19.898 - 20.015: 97.3892% ( 4) 00:29:03.275 20.015 - 20.131: 97.5037% ( 10) 00:29:03.275 20.131 - 20.247: 97.5839% ( 7) 00:29:03.275 20.247 - 20.364: 97.6984% ( 10) 00:29:03.275 20.364 - 20.480: 97.8930% ( 17) 00:29:03.275 20.480 - 20.596: 97.9847% ( 8) 00:29:03.275 20.596 - 20.713: 98.1793% ( 17) 00:29:03.275 20.713 - 20.829: 98.3625% ( 16) 00:29:03.275 20.829 - 20.945: 98.4083% ( 4) 00:29:03.275 20.945 - 21.062: 98.5572% ( 13) 00:29:03.275 21.062 - 21.178: 98.6030% ( 4) 00:29:03.275 21.178 - 21.295: 98.6832% ( 7) 00:29:03.275 21.295 - 21.411: 98.7633% ( 7) 00:29:03.275 21.411 - 21.527: 98.8320% ( 6) 00:29:03.275 21.527 - 21.644: 98.8549% ( 2) 00:29:03.275 21.644 - 21.760: 98.9007% ( 4) 00:29:03.275 21.760 - 21.876: 98.9580% ( 5) 00:29:03.275 21.876 - 21.993: 98.9694% ( 1) 00:29:03.275 21.993 - 22.109: 98.9923% ( 2) 00:29:03.275 22.109 - 22.225: 99.0381% ( 4) 00:29:03.275 22.225 - 22.342: 99.0610% ( 2) 00:29:03.275 22.342 - 22.458: 99.1297% ( 6) 00:29:03.275 22.458 - 22.575: 99.1412% ( 1) 00:29:03.275 22.575 - 22.691: 99.2213% ( 7) 00:29:03.275 22.691 - 22.807: 99.2328% ( 1) 00:29:03.275 22.807 - 22.924: 99.2557% ( 2) 00:29:03.275 22.924 - 23.040: 99.2900% ( 3) 00:29:03.275 23.040 - 23.156: 99.3130% ( 2) 00:29:03.275 23.156 - 23.273: 99.3244% ( 1) 00:29:03.275 23.622 - 23.738: 99.3359% ( 1) 00:29:03.275 23.855 - 23.971: 99.3588% ( 2) 00:29:03.275 23.971 - 24.087: 99.3702% ( 1) 00:29:03.275 24.087 - 24.204: 99.3817% ( 1) 00:29:03.275 24.320 - 24.436: 99.4046% ( 2) 00:29:03.275 24.669 - 24.785: 99.4160% ( 1) 00:29:03.275 25.018 - 25.135: 99.4389% ( 2) 00:29:03.275 25.367 - 25.484: 99.4504% ( 1) 00:29:03.275 25.484 - 25.600: 99.4618% ( 1) 00:29:03.275 26.065 - 26.182: 99.4733% ( 1) 00:29:03.275 26.415 - 26.531: 99.4847% ( 1) 00:29:03.275 26.764 - 26.880: 99.4962% ( 1) 00:29:03.275 26.880 - 26.996: 99.5191% ( 2) 00:29:03.275 26.996 - 27.113: 99.5305% ( 1) 00:29:03.275 27.578 - 27.695: 99.5534% ( 2) 00:29:03.275 27.811 - 27.927: 99.5649% ( 1) 00:29:03.275 27.927 - 28.044: 99.5763% ( 1) 00:29:03.275 28.160 - 28.276: 99.5878% ( 1) 00:29:03.275 28.393 - 28.509: 99.5992% ( 1) 00:29:03.275 28.509 - 28.625: 99.6221% ( 2) 00:29:03.275 28.742 - 28.858: 99.6336% ( 1) 00:29:03.275 29.091 - 29.207: 99.6450% ( 1) 00:29:03.275 29.207 - 29.324: 99.6565% ( 1) 00:29:03.275 29.556 - 29.673: 99.6679% ( 1) 00:29:03.275 29.789 - 30.022: 99.6794% ( 1) 00:29:03.275 30.953 - 31.185: 99.7023% ( 2) 00:29:03.275 31.418 - 31.651: 99.7137% ( 1) 00:29:03.275 32.116 - 32.349: 99.7252% ( 1) 00:29:03.275 32.349 - 32.582: 99.7366% ( 1) 00:29:03.275 33.513 - 33.745: 99.7481% ( 1) 00:29:03.275 33.745 - 33.978: 99.7595% ( 1) 00:29:03.275 34.211 - 34.444: 99.7710% ( 1) 00:29:03.275 34.676 - 34.909: 99.7824% ( 1) 00:29:03.275 34.909 - 35.142: 99.7939% ( 1) 
00:29:03.275 35.607 - 35.840: 99.8053% ( 1) 00:29:03.275 36.073 - 36.305: 99.8168% ( 1) 00:29:03.275 36.771 - 37.004: 99.8282% ( 1) 00:29:03.275 37.004 - 37.236: 99.8397% ( 1) 00:29:03.275 38.633 - 38.865: 99.8511% ( 1) 00:29:03.275 40.960 - 41.193: 99.8626% ( 1) 00:29:03.275 43.753 - 43.985: 99.8740% ( 1) 00:29:03.275 45.382 - 45.615: 99.8969% ( 2) 00:29:03.275 50.735 - 50.967: 99.9084% ( 1) 00:29:03.275 50.967 - 51.200: 99.9313% ( 2) 00:29:03.275 52.131 - 52.364: 99.9427% ( 1) 00:29:03.275 59.113 - 59.345: 99.9542% ( 1) 00:29:03.275 76.800 - 77.265: 99.9656% ( 1) 00:29:03.276 121.949 - 122.880: 99.9771% ( 1) 00:29:03.276 138.705 - 139.636: 99.9885% ( 1) 00:29:03.276 190.836 - 191.767: 100.0000% ( 1) 00:29:03.276 00:29:03.276 00:29:03.276 real 0m1.281s 00:29:03.276 user 0m1.086s 00:29:03.276 sys 0m0.120s 00:29:03.276 10:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:03.276 10:54:29 -- common/autotest_common.sh@10 -- # set +x 00:29:03.276 ************************************ 00:29:03.276 END TEST nvme_overhead 00:29:03.276 ************************************ 00:29:03.276 10:54:29 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:29:03.276 10:54:29 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:29:03.276 10:54:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:03.276 10:54:29 -- common/autotest_common.sh@10 -- # set +x 00:29:03.276 ************************************ 00:29:03.276 START TEST nvme_arbitration 00:29:03.276 ************************************ 00:29:03.276 10:54:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:29:06.554 Initializing NVMe Controllers 00:29:06.554 Attached to 0000:00:06.0 00:29:06.554 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:29:06.554 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:29:06.554 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:29:06.554 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:29:06.554 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:29:06.554 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:29:06.554 Initialization complete. Launching workers. 
00:29:06.554 Starting thread on core 1 with urgent priority queue 00:29:06.554 Starting thread on core 2 with urgent priority queue 00:29:06.554 Starting thread on core 3 with urgent priority queue 00:29:06.554 Starting thread on core 0 with urgent priority queue 00:29:06.554 QEMU NVMe Ctrl (12340 ) core 0: 6974.00 IO/s 14.34 secs/100000 ios 00:29:06.554 QEMU NVMe Ctrl (12340 ) core 1: 7035.00 IO/s 14.21 secs/100000 ios 00:29:06.554 QEMU NVMe Ctrl (12340 ) core 2: 3696.33 IO/s 27.05 secs/100000 ios 00:29:06.554 QEMU NVMe Ctrl (12340 ) core 3: 4017.33 IO/s 24.89 secs/100000 ios 00:29:06.554 ======================================================== 00:29:06.554 00:29:06.554 00:29:06.554 real 0m3.333s 00:29:06.554 user 0m9.178s 00:29:06.554 sys 0m0.096s 00:29:06.554 10:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.554 10:54:33 -- common/autotest_common.sh@10 -- # set +x 00:29:06.554 ************************************ 00:29:06.554 END TEST nvme_arbitration 00:29:06.554 ************************************ 00:29:06.554 10:54:33 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:29:06.554 10:54:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:06.554 10:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.554 10:54:33 -- common/autotest_common.sh@10 -- # set +x 00:29:06.555 ************************************ 00:29:06.555 START TEST nvme_single_aen 00:29:06.555 ************************************ 00:29:06.555 10:54:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:29:06.555 [2024-07-24 10:54:33.237912] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:06.555 [2024-07-24 10:54:33.238042] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.813 [2024-07-24 10:54:33.403044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:06.813 Asynchronous Event Request test 00:29:06.813 Attached to 0000:00:06.0 00:29:06.813 Reset controller to setup AER completions for this process 00:29:06.813 Registering asynchronous event callbacks... 00:29:06.813 Getting orig temperature thresholds of all controllers 00:29:06.813 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:06.813 Setting all controllers temperature threshold low to trigger AER 00:29:06.813 Waiting for all controllers temperature threshold to be set lower 00:29:06.813 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:06.813 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:29:06.813 Waiting for all controllers to trigger AER and reset threshold 00:29:06.813 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:06.813 Cleaning up... 
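A quick consistency note on the arbitration summary a few entries above: each lcore was configured for -n 100000 I/Os, so the reported secs/100000 ios column is simply 100000 divided by that core's measured IO/s. A minimal shell check, with the two extreme cores copied from the summary (awk is assumed to be on the PATH):

  awk 'BEGIN {
      printf "core 0: %.2f s per 100000 ios\n", 100000 / 6974.00;   # summary reports 14.34
      printf "core 2: %.2f s per 100000 ios\n", 100000 / 3696.33;   # summary reports 27.05
  }'

Cores 0 and 1 sustain roughly twice the rate of cores 2 and 3, which is presumably the per-queue arbitration effect the example is built to exercise.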
00:29:06.813 00:29:06.813 real 0m0.238s 00:29:06.813 user 0m0.071s 00:29:06.813 sys 0m0.100s 00:29:06.813 10:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.813 10:54:33 -- common/autotest_common.sh@10 -- # set +x 00:29:06.813 ************************************ 00:29:06.813 END TEST nvme_single_aen 00:29:06.813 ************************************ 00:29:06.813 10:54:33 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:29:06.813 10:54:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:06.813 10:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.813 10:54:33 -- common/autotest_common.sh@10 -- # set +x 00:29:06.813 ************************************ 00:29:06.813 START TEST nvme_doorbell_aers 00:29:06.813 ************************************ 00:29:06.813 10:54:33 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:29:06.813 10:54:33 -- nvme/nvme.sh@70 -- # bdfs=() 00:29:06.813 10:54:33 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:29:06.813 10:54:33 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:29:06.813 10:54:33 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:29:06.813 10:54:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:06.813 10:54:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:06.813 10:54:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:06.813 10:54:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:06.813 10:54:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:07.070 10:54:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:07.070 10:54:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:07.070 10:54:33 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:29:07.070 10:54:33 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:07.357 [2024-07-24 10:54:33.805162] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149443) is not found. Dropping the request. 00:29:17.331 Executing: test_write_invalid_db 00:29:17.331 Waiting for AER completion... 00:29:17.331 Failure: test_write_invalid_db 00:29:17.331 00:29:17.331 Executing: test_invalid_db_write_overflow_sq 00:29:17.331 Waiting for AER completion... 00:29:17.331 Failure: test_invalid_db_write_overflow_sq 00:29:17.331 00:29:17.331 Executing: test_invalid_db_write_overflow_cq 00:29:17.331 Waiting for AER completion... 
00:29:17.331 Failure: test_invalid_db_write_overflow_cq 00:29:17.331 00:29:17.331 00:29:17.331 real 0m10.098s 00:29:17.331 user 0m8.528s 00:29:17.331 sys 0m1.488s 00:29:17.331 10:54:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.331 10:54:43 -- common/autotest_common.sh@10 -- # set +x 00:29:17.331 ************************************ 00:29:17.331 END TEST nvme_doorbell_aers 00:29:17.331 ************************************ 00:29:17.331 10:54:43 -- nvme/nvme.sh@97 -- # uname 00:29:17.331 10:54:43 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:29:17.331 10:54:43 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:17.331 10:54:43 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:29:17.331 10:54:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:17.331 10:54:43 -- common/autotest_common.sh@10 -- # set +x 00:29:17.331 ************************************ 00:29:17.331 START TEST nvme_multi_aen 00:29:17.331 ************************************ 00:29:17.331 10:54:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:17.331 [2024-07-24 10:54:43.673093] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:17.331 [2024-07-24 10:54:43.673208] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.331 [2024-07-24 10:54:43.836916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:17.331 [2024-07-24 10:54:43.836977] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149443) is not found. Dropping the request. 00:29:17.331 [2024-07-24 10:54:43.837074] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149443) is not found. Dropping the request. 00:29:17.331 [2024-07-24 10:54:43.837102] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149443) is not found. Dropping the request. 00:29:17.331 [2024-07-24 10:54:43.843649] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:17.331 [2024-07-24 10:54:43.843916] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.331 Child process pid: 149632 00:29:17.590 [Child] Asynchronous Event Request test 00:29:17.590 [Child] Attached to 0000:00:06.0 00:29:17.590 [Child] Registering asynchronous event callbacks... 00:29:17.590 [Child] Getting orig temperature thresholds of all controllers 00:29:17.590 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:17.590 [Child] Waiting for all controllers to trigger AER and reset threshold 00:29:17.590 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:17.590 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:17.590 [Child] Cleaning up... 00:29:17.590 Asynchronous Event Request test 00:29:17.590 Attached to 0000:00:06.0 00:29:17.590 Reset controller to setup AER completions for this process 00:29:17.590 Registering asynchronous event callbacks... 
00:29:17.590 Getting orig temperature thresholds of all controllers 00:29:17.590 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:17.590 Setting all controllers temperature threshold low to trigger AER 00:29:17.590 Waiting for all controllers temperature threshold to be set lower 00:29:17.590 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:17.590 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:29:17.590 Waiting for all controllers to trigger AER and reset threshold 00:29:17.590 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:17.590 Cleaning up... 00:29:17.590 00:29:17.590 real 0m0.507s 00:29:17.590 user 0m0.160s 00:29:17.590 sys 0m0.166s 00:29:17.590 10:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.590 10:54:44 -- common/autotest_common.sh@10 -- # set +x 00:29:17.590 ************************************ 00:29:17.590 END TEST nvme_multi_aen 00:29:17.590 ************************************ 00:29:17.590 10:54:44 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:29:17.590 10:54:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:17.590 10:54:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:17.590 10:54:44 -- common/autotest_common.sh@10 -- # set +x 00:29:17.590 ************************************ 00:29:17.590 START TEST nvme_startup 00:29:17.590 ************************************ 00:29:17.590 10:54:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:29:17.849 Initializing NVMe Controllers 00:29:17.849 Attached to 0000:00:06.0 00:29:17.849 Initialization complete. 00:29:17.849 Time used:198017.672 (us). 00:29:17.849 00:29:17.849 real 0m0.276s 00:29:17.849 user 0m0.082s 00:29:17.849 sys 0m0.124s 00:29:17.849 10:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.849 10:54:44 -- common/autotest_common.sh@10 -- # set +x 00:29:17.849 ************************************ 00:29:17.849 END TEST nvme_startup 00:29:17.849 ************************************ 00:29:17.849 10:54:44 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:29:17.849 10:54:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:17.849 10:54:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:17.849 10:54:44 -- common/autotest_common.sh@10 -- # set +x 00:29:17.849 ************************************ 00:29:17.849 START TEST nvme_multi_secondary 00:29:17.849 ************************************ 00:29:17.849 10:54:44 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:29:17.849 10:54:44 -- nvme/nvme.sh@52 -- # pid0=149697 00:29:17.849 10:54:44 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:29:17.849 10:54:44 -- nvme/nvme.sh@54 -- # pid1=149698 00:29:17.849 10:54:44 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:29:17.849 10:54:44 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:29:22.059 Initializing NVMe Controllers 00:29:22.059 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:22.059 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:29:22.059 Initialization complete. Launching workers. 
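The nvme_multi_secondary trace above launches three spdk_nvme_perf instances against the same controller; they can coexist because all three pass the same shared-memory id (-i 0), which appears to be what lets the secondary processes attach to the controller the primary already owns. A minimal sketch of the launch pattern, reconstructed from the nvme.sh trace (the PID bookkeeping and exact ordering are assumptions; the real helper stores $! into pid0/pid1 and waits on them later, as the wait 149697 / wait 149698 lines further down show):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary, core 0, runs longest
  pid0=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary, core 1
  pid1=$!
  "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # secondary, core 2, foreground
  wait "$pid0" "$pid1"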
00:29:22.059 ======================================================== 00:29:22.059 Latency(us) 00:29:22.059 Device Information : IOPS MiB/s Average min max 00:29:22.059 PCIE (0000:00:06.0) NSID 1 from core 1: 35623.72 139.16 448.80 116.14 4450.94 00:29:22.059 ======================================================== 00:29:22.059 Total : 35623.72 139.16 448.80 116.14 4450.94 00:29:22.059 00:29:22.059 Initializing NVMe Controllers 00:29:22.059 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:22.059 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:22.059 Initialization complete. Launching workers. 00:29:22.059 ======================================================== 00:29:22.059 Latency(us) 00:29:22.059 Device Information : IOPS MiB/s Average min max 00:29:22.059 PCIE (0000:00:06.0) NSID 1 from core 2: 14329.95 55.98 1115.88 150.12 17417.31 00:29:22.059 ======================================================== 00:29:22.059 Total : 14329.95 55.98 1115.88 150.12 17417.31 00:29:22.059 00:29:22.060 10:54:48 -- nvme/nvme.sh@56 -- # wait 149697 00:29:23.437 Initializing NVMe Controllers 00:29:23.437 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:23.437 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:23.437 Initialization complete. Launching workers. 00:29:23.437 ======================================================== 00:29:23.437 Latency(us) 00:29:23.437 Device Information : IOPS MiB/s Average min max 00:29:23.437 PCIE (0000:00:06.0) NSID 1 from core 0: 44211.41 172.70 361.58 95.96 1806.47 00:29:23.437 ======================================================== 00:29:23.437 Total : 44211.41 172.70 361.58 95.96 1806.47 00:29:23.437 00:29:23.437 10:54:49 -- nvme/nvme.sh@57 -- # wait 149698 00:29:23.437 10:54:49 -- nvme/nvme.sh@61 -- # pid0=149771 00:29:23.437 10:54:49 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:29:23.437 10:54:49 -- nvme/nvme.sh@63 -- # pid1=149772 00:29:23.437 10:54:49 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:29:23.437 10:54:49 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:29:26.729 Initializing NVMe Controllers 00:29:26.729 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:26.729 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:29:26.729 Initialization complete. Launching workers. 00:29:26.729 ======================================================== 00:29:26.729 Latency(us) 00:29:26.729 Device Information : IOPS MiB/s Average min max 00:29:26.729 PCIE (0000:00:06.0) NSID 1 from core 1: 36636.32 143.11 436.34 145.63 2738.21 00:29:26.729 ======================================================== 00:29:26.729 Total : 36636.32 143.11 436.34 145.63 2738.21 00:29:26.729 00:29:26.985 Initializing NVMe Controllers 00:29:26.985 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:26.985 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:26.985 Initialization complete. Launching workers. 
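The two latency tables above are internally consistent: with fixed 4096-byte reads (-o 4096), the MiB/s column is just IOPS x 4096 / 2^20. Checking the core 1 and core 2 rows with values copied from the tables:

  awk 'BEGIN {
      printf "core 1: %.2f MiB/s\n", 35623.72 * 4096 / 1048576;   # table shows 139.16
      printf "core 2: %.2f MiB/s\n", 14329.95 * 4096 / 1048576;   # table shows 55.98
  }'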
00:29:26.985 ======================================================== 00:29:26.985 Latency(us) 00:29:26.985 Device Information : IOPS MiB/s Average min max 00:29:26.985 PCIE (0000:00:06.0) NSID 1 from core 0: 35455.44 138.50 450.92 135.78 3540.61 00:29:26.985 ======================================================== 00:29:26.985 Total : 35455.44 138.50 450.92 135.78 3540.61 00:29:26.985 00:29:28.885 Initializing NVMe Controllers 00:29:28.885 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:28.885 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:28.885 Initialization complete. Launching workers. 00:29:28.885 ======================================================== 00:29:28.885 Latency(us) 00:29:28.885 Device Information : IOPS MiB/s Average min max 00:29:28.885 PCIE (0000:00:06.0) NSID 1 from core 2: 18682.57 72.98 855.96 125.97 28295.06 00:29:28.885 ======================================================== 00:29:28.885 Total : 18682.57 72.98 855.96 125.97 28295.06 00:29:28.885 00:29:28.885 10:54:55 -- nvme/nvme.sh@65 -- # wait 149771 00:29:28.885 10:54:55 -- nvme/nvme.sh@66 -- # wait 149772 00:29:28.885 00:29:28.885 real 0m10.807s 00:29:28.885 user 0m18.537s 00:29:28.885 sys 0m0.735s 00:29:28.885 10:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:28.885 10:54:55 -- common/autotest_common.sh@10 -- # set +x 00:29:28.885 ************************************ 00:29:28.885 END TEST nvme_multi_secondary 00:29:28.885 ************************************ 00:29:28.885 10:54:55 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:29:28.885 10:54:55 -- nvme/nvme.sh@102 -- # kill_stub 00:29:28.885 10:54:55 -- common/autotest_common.sh@1065 -- # [[ -e /proc/149007 ]] 00:29:28.885 10:54:55 -- common/autotest_common.sh@1066 -- # kill 149007 00:29:28.885 10:54:55 -- common/autotest_common.sh@1067 -- # wait 149007 00:29:29.819 [2024-07-24 10:54:56.199776] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149631) is not found. Dropping the request. 00:29:29.819 [2024-07-24 10:54:56.200075] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149631) is not found. Dropping the request. 00:29:29.819 [2024-07-24 10:54:56.200239] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149631) is not found. Dropping the request. 00:29:29.819 [2024-07-24 10:54:56.200391] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 149631) is not found. Dropping the request. 00:29:29.819 10:54:56 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:29:29.819 10:54:56 -- common/autotest_common.sh@1073 -- # echo 2 00:29:29.819 10:54:56 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:29.819 10:54:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:29.819 10:54:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:29.819 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:29:29.819 ************************************ 00:29:29.819 START TEST bdev_nvme_reset_stuck_adm_cmd 00:29:29.819 ************************************ 00:29:29.819 10:54:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:29.819 * Looking for test storage... 
00:29:29.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:29.819 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:29:29.819 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:29:29.819 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:29:29.819 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:29:29.819 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:29:29.819 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:29:29.819 10:54:56 -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:29.819 10:54:56 -- common/autotest_common.sh@1509 -- # local bdfs 00:29:29.819 10:54:56 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:29.819 10:54:56 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:29.819 10:54:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:29.819 10:54:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:29.819 10:54:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:29.819 10:54:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:29.819 10:54:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:29.819 10:54:56 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:29.819 10:54:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:29.819 10:54:56 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:29:29.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.819 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:29:29.820 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:29:29.820 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=149935 00:29:29.820 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:29:29.820 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:29.820 10:54:56 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 149935 00:29:29.820 10:54:56 -- common/autotest_common.sh@819 -- # '[' -z 149935 ']' 00:29:29.820 10:54:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.820 10:54:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:29.820 10:54:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.820 10:54:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:29.820 10:54:56 -- common/autotest_common.sh@10 -- # set +x 00:29:29.820 [2024-07-24 10:54:56.508879] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
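The get_first_nvme_bdf / get_nvme_bdfs helpers traced above boil down to a single pipeline: gen_nvme.sh prints a JSON bdev config and jq pulls out the PCI addresses, the first of which becomes the target bdf. Written out as a stand-alone snippet (rootdir is assumed to be the spdk_repo checkout used in this run):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1          # mirrors the (( 1 == 0 )) count guard in the trace
  bdf=${bdfs[0]}                           # 0000:00:06.0 on this VM
  echo "$bdf"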
00:29:30.078 [2024-07-24 10:54:56.509625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149935 ] 00:29:30.078 [2024-07-24 10:54:56.697032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.336 [2024-07-24 10:54:56.781427] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:30.336 [2024-07-24 10:54:56.781857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.336 [2024-07-24 10:54:56.781984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:30.336 [2024-07-24 10:54:56.782114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.336 [2024-07-24 10:54:56.782114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:30.903 10:54:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:30.903 10:54:57 -- common/autotest_common.sh@852 -- # return 0 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:29:30.903 10:54:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:30.903 10:54:57 -- common/autotest_common.sh@10 -- # set +x 00:29:30.903 nvme0n1 00:29:30.903 10:54:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ogZJw.txt 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:30.903 10:54:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:30.903 10:54:57 -- common/autotest_common.sh@10 -- # set +x 00:29:30.903 true 00:29:30.903 10:54:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721818497 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=149960 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:30.903 10:54:57 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:33.433 10:54:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.433 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:29:33.433 [2024-07-24 10:54:59.545596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:33.433 [2024-07-24 10:54:59.546117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.433 [2024-07-24 10:54:59.546216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:33.433 [2024-07-24 10:54:59.546280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.433 [2024-07-24 10:54:59.548240] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:33.433 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 149960 00:29:33.433 10:54:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 149960 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 149960 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:33.433 10:54:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.433 10:54:59 -- common/autotest_common.sh@10 -- # set +x 00:29:33.433 10:54:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ogZJw.txt 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:33.433 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ogZJw.txt 00:29:33.434 10:54:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 149935 00:29:33.434 10:54:59 -- common/autotest_common.sh@926 -- # '[' -z 149935 ']' 00:29:33.434 10:54:59 -- common/autotest_common.sh@930 -- # kill -0 149935 00:29:33.434 10:54:59 -- common/autotest_common.sh@931 -- # uname 00:29:33.434 
10:54:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:33.434 10:54:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 149935 00:29:33.434 killing process with pid 149935 00:29:33.434 10:54:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:33.434 10:54:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:33.434 10:54:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 149935' 00:29:33.434 10:54:59 -- common/autotest_common.sh@945 -- # kill 149935 00:29:33.434 10:54:59 -- common/autotest_common.sh@950 -- # wait 149935 00:29:33.692 10:55:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:33.692 10:55:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:33.692 00:29:33.692 real 0m3.827s 00:29:33.692 user 0m13.730s 00:29:33.692 sys 0m0.535s 00:29:33.692 10:55:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:33.692 ************************************ 00:29:33.692 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:33.692 ************************************ 00:29:33.692 10:55:00 -- common/autotest_common.sh@10 -- # set +x 00:29:33.692 10:55:00 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:33.692 10:55:00 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:33.692 10:55:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:33.692 10:55:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:33.692 10:55:00 -- common/autotest_common.sh@10 -- # set +x 00:29:33.692 ************************************ 00:29:33.692 START TEST nvme_fio 00:29:33.692 ************************************ 00:29:33.692 10:55:00 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:29:33.692 10:55:00 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:33.692 10:55:00 -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:33.692 10:55:00 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:33.692 10:55:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:33.692 10:55:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:33.692 10:55:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:33.692 10:55:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:33.692 10:55:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:33.692 10:55:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:33.692 10:55:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:33.692 10:55:00 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:29:33.692 10:55:00 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:33.692 10:55:00 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:33.692 10:55:00 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:33.692 10:55:00 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:33.951 10:55:00 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:33.951 10:55:00 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:34.209 10:55:00 -- nvme/nvme.sh@41 -- # bs=4096 00:29:34.209 10:55:00 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:34.209 
10:55:00 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:34.209 10:55:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:34.209 10:55:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:34.209 10:55:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:34.209 10:55:00 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:34.209 10:55:00 -- common/autotest_common.sh@1320 -- # shift 00:29:34.209 10:55:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:34.209 10:55:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:34.209 10:55:00 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:34.209 10:55:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:34.209 10:55:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:34.209 10:55:00 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:29:34.209 10:55:00 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:29:34.209 10:55:00 -- common/autotest_common.sh@1326 -- # break 00:29:34.209 10:55:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:34.209 10:55:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:34.209 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:34.209 fio-3.35 00:29:34.209 Starting 1 thread 00:29:37.496 00:29:37.496 test: (groupid=0, jobs=1): err= 0: pid=150084: Wed Jul 24 10:55:04 2024 00:29:37.496 read: IOPS=18.8k, BW=73.3MiB/s (76.9MB/s)(147MiB/2001msec) 00:29:37.496 slat (nsec): min=4445, max=66967, avg=5438.28, stdev=1410.93 00:29:37.496 clat (usec): min=263, max=8344, avg=3392.97, stdev=400.24 00:29:37.496 lat (usec): min=268, max=8411, avg=3398.41, stdev=400.65 00:29:37.496 clat percentiles (usec): 00:29:37.496 | 1.00th=[ 2769], 5.00th=[ 2966], 10.00th=[ 3064], 20.00th=[ 3163], 00:29:37.496 | 30.00th=[ 3228], 40.00th=[ 3294], 50.00th=[ 3326], 60.00th=[ 3392], 00:29:37.496 | 70.00th=[ 3458], 80.00th=[ 3556], 90.00th=[ 3818], 95.00th=[ 4113], 00:29:37.496 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 7373], 99.95th=[ 7439], 00:29:37.496 | 99.99th=[ 8160] 00:29:37.496 bw ( KiB/s): min=69883, max=80520, per=99.92%, avg=75015.25, stdev=4481.62, samples=4 00:29:37.496 iops : min=17470, max=20130, avg=18753.50, stdev=1120.75, samples=4 00:29:37.496 write: IOPS=18.8k, BW=73.3MiB/s (76.9MB/s)(147MiB/2001msec); 0 zone resets 00:29:37.497 slat (nsec): min=4531, max=41885, avg=5571.17, stdev=1387.48 00:29:37.497 clat (usec): min=235, max=8190, avg=3405.60, stdev=395.04 00:29:37.497 lat (usec): min=240, max=8203, avg=3411.17, stdev=395.42 00:29:37.497 clat percentiles (usec): 00:29:37.497 | 1.00th=[ 2802], 5.00th=[ 2999], 10.00th=[ 3064], 20.00th=[ 3163], 00:29:37.497 | 30.00th=[ 3228], 40.00th=[ 3294], 50.00th=[ 3359], 60.00th=[ 3425], 00:29:37.497 | 70.00th=[ 3490], 80.00th=[ 3556], 90.00th=[ 3851], 95.00th=[ 4113], 00:29:37.497 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 7373], 99.95th=[ 
7439], 00:29:37.497 | 99.99th=[ 8029] 00:29:37.497 bw ( KiB/s): min=70084, max=80528, per=99.92%, avg=75039.50, stdev=4402.88, samples=4 00:29:37.497 iops : min=17521, max=20132, avg=18759.75, stdev=1100.78, samples=4 00:29:37.497 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:29:37.497 lat (msec) : 2=0.28%, 4=91.86%, 10=7.83% 00:29:37.497 cpu : usr=100.05%, sys=0.00%, ctx=4, majf=0, minf=40 00:29:37.497 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:37.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:37.497 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:37.497 issued rwts: total=37556,37569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:37.497 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:37.497 00:29:37.497 Run status group 0 (all jobs): 00:29:37.497 READ: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=147MiB (154MB), run=2001-2001msec 00:29:37.497 WRITE: bw=73.3MiB/s (76.9MB/s), 73.3MiB/s-73.3MiB/s (76.9MB/s-76.9MB/s), io=147MiB (154MB), run=2001-2001msec 00:29:37.755 ----------------------------------------------------- 00:29:37.755 Suppressions used: 00:29:37.755 count bytes template 00:29:37.755 1 32 /usr/src/fio/parse.c 00:29:37.755 ----------------------------------------------------- 00:29:37.755 00:29:37.755 10:55:04 -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:37.755 10:55:04 -- nvme/nvme.sh@46 -- # true 00:29:37.755 00:29:37.755 real 0m4.147s 00:29:37.755 user 0m3.502s 00:29:37.755 sys 0m0.315s 00:29:37.755 10:55:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.755 10:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:37.755 ************************************ 00:29:37.755 END TEST nvme_fio 00:29:37.755 ************************************ 00:29:37.755 ************************************ 00:29:37.755 END TEST nvme 00:29:37.755 ************************************ 00:29:37.755 00:29:37.755 real 0m44.805s 00:29:37.755 user 1m57.150s 00:29:37.755 sys 0m7.658s 00:29:37.755 10:55:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.755 10:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:37.755 10:55:04 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:29:37.755 10:55:04 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:37.755 10:55:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:37.755 10:55:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:37.755 10:55:04 -- common/autotest_common.sh@10 -- # set +x 00:29:37.755 ************************************ 00:29:37.755 START TEST nvme_scc 00:29:37.755 ************************************ 00:29:37.755 10:55:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:38.014 * Looking for test storage... 
00:29:38.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:38.014 10:55:04 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:38.014 10:55:04 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:38.014 10:55:04 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:38.014 10:55:04 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:38.014 10:55:04 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:38.014 10:55:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.014 10:55:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.014 10:55:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.014 10:55:04 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:38.014 10:55:04 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:38.014 10:55:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:38.014 10:55:04 -- paths/export.sh@5 -- # export PATH 00:29:38.014 10:55:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:38.014 10:55:04 -- nvme/functions.sh@10 -- # ctrls=() 00:29:38.014 10:55:04 -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:38.014 10:55:04 -- nvme/functions.sh@11 -- # nvmes=() 00:29:38.014 10:55:04 -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:38.014 10:55:04 -- nvme/functions.sh@12 -- # bdfs=() 00:29:38.014 10:55:04 -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:38.014 10:55:04 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:38.014 10:55:04 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:38.014 10:55:04 -- nvme/functions.sh@14 -- # nvme_name= 00:29:38.014 10:55:04 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:38.014 10:55:04 -- nvme/nvme_scc.sh@12 -- # uname 00:29:38.014 10:55:04 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:29:38.014 10:55:04 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
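functions.sh, sourced above, keeps its scan results in the ctrls/nvmes/bdfs associative arrays it declares, and the controller scan that follows fills them by walking /sys/class/nvme and splitting every `nvme id-ctrl` output line on ':' (the IFS=: / read -r reg val pairs in the trace). A condensed sketch of that parsing loop, under the assumption that id-ctrl prints one space after the colon; the real nvme_get helper additionally routes each value through eval into a dynamically named array, which is what produces the eval 'nvme0[vid]=...' lines below:

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}; val=${val# }    # trim the field-name padding and the space after ':'
      [[ -n $reg ]] && nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[vid]}"                          # 0x1b36 for this QEMU controller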
00:29:38.014 10:55:04 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:38.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:38.272 Waiting for block devices as requested 00:29:38.272 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:38.532 10:55:04 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:29:38.532 10:55:04 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:38.532 10:55:04 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:38.532 10:55:04 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:38.532 10:55:04 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:29:38.532 10:55:04 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:29:38.532 10:55:05 -- scripts/common.sh@15 -- # local i 00:29:38.532 10:55:05 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:38.532 10:55:05 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:38.532 10:55:05 -- scripts/common.sh@24 -- # return 0 00:29:38.532 10:55:05 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:38.532 10:55:05 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:38.532 10:55:05 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:38.532 10:55:05 -- nvme/functions.sh@18 -- # shift 00:29:38.532 10:55:05 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:38.532 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.532 10:55:05 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:38.532 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.532 10:55:05 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:38.532 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.532 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.532 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:38.532 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 
00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:38.533 10:55:05 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.533 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.533 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:29:38.533 10:55:05 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- 
# read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:29:38.534 
10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.534 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.534 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:29:38.534 10:55:05 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:29:38.535 
10:55:05 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 
10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:29:38.535 10:55:05 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:38.535 10:55:05 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:29:38.535 10:55:05 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:29:38.535 10:55:05 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@18 -- # shift 00:29:38.535 10:55:05 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 
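The namespace loop that just started walks the controller's sysfs directory with a glob rather than calling out to nvme-cli. A minimal standalone sketch of that pattern (device paths taken from this run, loop body simplified):

    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/${ctrl##*/}n"*; do      # glob: /sys/class/nvme/nvme0/nvme0n*
        [[ -e $ns ]] || continue             # nothing matched -> skip
        ns_dev=${ns##*/}                     # e.g. nvme0n1
        echo "namespace $ns_dev belongs to ${ctrl##*/}"
    done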
00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.535 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.535 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:29:38.535 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
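The long run of IFS=: / read / eval lines above and below all come from one small helper: nvme_get pipes `nvme id-ctrl` or `nvme id-ns` output through a read loop and stores each field in a global associative array. A re-sketch of that pattern (not the verbatim functions.sh source):

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                       # e.g. declares global array nvme0n1
        while IFS=: read -r reg val; do           # split "nsze : 0x140000" at the first colon
            reg=${reg//[[:space:]]/}              # trim the padded key
            val=${val#"${val%%[![:space:]]*}"}    # trim leading whitespace from the value
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val}\""      # e.g. nvme0n1[nsze]="0x140000"
        done < <("$@")
    }
    # usage as in the trace: nvme_get_sketch nvme0n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1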
00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 
10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:38.536 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.536 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.536 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:38.537 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:38.537 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:38.537 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.537 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.537 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:38.537 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:38.537 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:38.537 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.537 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.537 10:55:05 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:38.537 10:55:05 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:38.537 10:55:05 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:38.537 10:55:05 -- nvme/functions.sh@21 -- # IFS=: 00:29:38.537 10:55:05 -- nvme/functions.sh@21 -- # read -r reg val 00:29:38.537 10:55:05 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:29:38.537 10:55:05 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:29:38.537 10:55:05 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:29:38.537 10:55:05 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:29:38.537 10:55:05 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:29:38.537 10:55:05 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:29:38.537 10:55:05 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:29:38.537 10:55:05 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:29:38.537 10:55:05 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:29:38.537 10:55:05 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:29:38.537 10:55:05 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:29:38.537 10:55:05 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:29:38.537 10:55:05 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:38.537 10:55:05 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:29:38.537 10:55:05 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:29:38.537 10:55:05 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:29:38.537 10:55:05 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:29:38.537 10:55:05 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:38.537 10:55:05 -- nvme/functions.sh@76 -- # echo 0x15d 00:29:38.537 10:55:05 -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:38.537 10:55:05 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:38.537 10:55:05 -- nvme/functions.sh@197 -- # echo nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:29:38.537 10:55:05 -- nvme/functions.sh@206 -- # echo nvme0 00:29:38.537 10:55:05 -- nvme/functions.sh@207 -- # return 0 00:29:38.537 10:55:05 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:29:38.537 10:55:05 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:29:38.537 10:55:05 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:39.104 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:39.104 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:40.038 10:55:06 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:40.038 10:55:06 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:40.038 10:55:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:40.038 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:29:40.038 ************************************ 00:29:40.038 START TEST nvme_simple_copy 00:29:40.038 ************************************ 00:29:40.038 10:55:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:40.310 Initializing NVMe Controllers 00:29:40.310 Attaching to 0000:00:06.0 00:29:40.310 Controller supports SCC. Attached to 0000:00:06.0 00:29:40.310 Namespace ID: 1 size: 5GB 00:29:40.310 Initialization complete. 00:29:40.310 00:29:40.310 Controller QEMU NVMe Ctrl (12340 ) 00:29:40.310 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:29:40.310 Namespace Block Size:4096 00:29:40.310 Writing LBAs 0 to 63 with Random Data 00:29:40.310 Copied LBAs from 0 - 63 to the Destination LBA 256 00:29:40.310 LBAs matching Written Data: 64 00:29:40.310 00:29:40.310 real 0m0.269s 00:29:40.310 user 0m0.114s 00:29:40.310 sys 0m0.057s 00:29:40.310 10:55:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.310 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:29:40.310 ************************************ 00:29:40.310 END TEST nvme_simple_copy 00:29:40.310 ************************************ 00:29:40.580 ************************************ 00:29:40.580 END TEST nvme_scc 00:29:40.580 ************************************ 00:29:40.580 00:29:40.580 real 0m2.564s 00:29:40.580 user 0m0.746s 00:29:40.580 sys 0m1.718s 00:29:40.580 10:55:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.580 10:55:06 -- common/autotest_common.sh@10 -- # set +x 00:29:40.580 10:55:07 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:29:40.580 10:55:07 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:29:40.580 10:55:07 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:29:40.580 10:55:07 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:29:40.580 10:55:07 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:29:40.580 10:55:07 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:40.580 10:55:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:40.580 10:55:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:40.580 10:55:07 -- common/autotest_common.sh@10 -- # set +x 00:29:40.580 ************************************ 00:29:40.580 START TEST nvme_rpc 00:29:40.580 ************************************ 00:29:40.580 10:55:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:40.580 * Looking for test storage... 
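nvme_scc.sh selected nvme0 earlier in this trace because ctrl_has_scc found the Simple Copy bit set in the cached ONCS value, which is what allowed the simple_copy run above to write LBAs 0-63 and copy them to LBA 256. The bit test spelled out:

    oncs=0x15d                       # from nvme0[oncs] in the id-ctrl dump above
    if (( oncs & 1 << 8 )); then     # 0x15d & 0x100 = 0x100, non-zero
        echo "Simple Copy (SCC) supported"
    fi
    # 0x15d = 0b1_0101_1101: bits 0,2,3,4,6,8 set; bit 8 is the NVMe Copy command support bit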
00:29:40.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:40.580 10:55:07 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:40.580 10:55:07 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:40.580 10:55:07 -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:40.580 10:55:07 -- common/autotest_common.sh@1509 -- # local bdfs 00:29:40.580 10:55:07 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:40.580 10:55:07 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:40.580 10:55:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:40.580 10:55:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:40.580 10:55:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:40.580 10:55:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:40.580 10:55:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:40.580 10:55:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:40.580 10:55:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:40.580 10:55:07 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:29:40.580 10:55:07 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:29:40.580 10:55:07 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=150566 00:29:40.580 10:55:07 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:40.580 10:55:07 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:40.580 10:55:07 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 150566 00:29:40.580 10:55:07 -- common/autotest_common.sh@819 -- # '[' -z 150566 ']' 00:29:40.580 10:55:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.580 10:55:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:40.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.580 10:55:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.580 10:55:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:40.580 10:55:07 -- common/autotest_common.sh@10 -- # set +x 00:29:40.580 [2024-07-24 10:55:07.239424] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
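get_first_nvme_bdf above resolves the test device by asking gen_nvme.sh for a JSON bdev config and pulling every traddr out of it with jq. A standalone illustration of that extraction; the JSON shape below is an illustrative stand-in, only the jq filter and the PCI address are taken verbatim from the trace:

    config='{ "config": [ { "method": "bdev_nvme_attach_controller",
                            "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:06.0" } } ] }'
    jq -r '.config[].params.traddr' <<< "$config"
    # -> 0000:00:06.0, the bdf the test then hands to bdev_nvme_attach_controller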
00:29:40.580 [2024-07-24 10:55:07.239611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150566 ] 00:29:40.838 [2024-07-24 10:55:07.389761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:40.838 [2024-07-24 10:55:07.466425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:40.838 [2024-07-24 10:55:07.466878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.838 [2024-07-24 10:55:07.466894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.771 10:55:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:41.771 10:55:08 -- common/autotest_common.sh@852 -- # return 0 00:29:41.771 10:55:08 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:29:42.028 Nvme0n1 00:29:42.028 10:55:08 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:42.028 10:55:08 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:42.286 request: 00:29:42.286 { 00:29:42.286 "filename": "non_existing_file", 00:29:42.286 "bdev_name": "Nvme0n1", 00:29:42.286 "method": "bdev_nvme_apply_firmware", 00:29:42.286 "req_id": 1 00:29:42.286 } 00:29:42.286 Got JSON-RPC error response 00:29:42.286 response: 00:29:42.286 { 00:29:42.286 "code": -32603, 00:29:42.286 "message": "open file failed." 00:29:42.286 } 00:29:42.286 10:55:08 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:42.286 10:55:08 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:42.286 10:55:08 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:42.545 10:55:09 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:42.545 10:55:09 -- nvme/nvme_rpc.sh@40 -- # killprocess 150566 00:29:42.545 10:55:09 -- common/autotest_common.sh@926 -- # '[' -z 150566 ']' 00:29:42.545 10:55:09 -- common/autotest_common.sh@930 -- # kill -0 150566 00:29:42.545 10:55:09 -- common/autotest_common.sh@931 -- # uname 00:29:42.545 10:55:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:42.545 10:55:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150566 00:29:42.545 10:55:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:42.545 10:55:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:42.545 killing process with pid 150566 00:29:42.545 10:55:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150566' 00:29:42.545 10:55:09 -- common/autotest_common.sh@945 -- # kill 150566 00:29:42.545 10:55:09 -- common/autotest_common.sh@950 -- # wait 150566 00:29:43.111 00:29:43.111 real 0m2.555s 00:29:43.111 user 0m5.314s 00:29:43.111 sys 0m0.602s 00:29:43.111 10:55:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.111 ************************************ 00:29:43.111 END TEST nvme_rpc 00:29:43.111 ************************************ 00:29:43.111 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 10:55:09 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:43.111 10:55:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:43.111 10:55:09 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:29:43.111 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:29:43.111 ************************************ 00:29:43.111 START TEST nvme_rpc_timeouts 00:29:43.111 ************************************ 00:29:43.111 10:55:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:43.111 * Looking for test storage... 00:29:43.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:43.111 10:55:09 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:43.111 10:55:09 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_150628 00:29:43.111 10:55:09 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_150628 00:29:43.111 10:55:09 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=150661 00:29:43.111 10:55:09 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:43.111 10:55:09 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:43.111 10:55:09 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 150661 00:29:43.111 10:55:09 -- common/autotest_common.sh@819 -- # '[' -z 150661 ']' 00:29:43.111 10:55:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.112 10:55:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:43.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.112 10:55:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.112 10:55:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:43.112 10:55:09 -- common/autotest_common.sh@10 -- # set +x 00:29:43.112 [2024-07-24 10:55:09.780785] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:43.112 [2024-07-24 10:55:09.781027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150661 ] 00:29:43.370 [2024-07-24 10:55:09.930863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:43.370 [2024-07-24 10:55:10.008322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:43.370 [2024-07-24 10:55:10.008681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.370 [2024-07-24 10:55:10.008671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.305 10:55:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:44.305 Checking default timeout settings: 00:29:44.305 10:55:10 -- common/autotest_common.sh@852 -- # return 0 00:29:44.305 10:55:10 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:44.305 10:55:10 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:44.563 Making settings changes with rpc: 00:29:44.563 10:55:11 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:44.563 10:55:11 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:44.821 Check default vs. 
modified settings: 00:29:44.821 10:55:11 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:29:44.821 10:55:11 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_150628 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_150628 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:45.080 Setting action_on_timeout is changed as expected. 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_150628 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_150628 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:45.080 Setting timeout_us is changed as expected. 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_150628 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_150628 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:45.080 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:45.338 10:55:11 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:45.338 Setting timeout_admin_us is changed as expected. 00:29:45.338 10:55:11 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:45.338 10:55:11 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
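The grep/awk/sed loop traced above compares three fields of the saved configuration before and after the bdev_nvme_set_options call. Condensed into a standalone sketch using the same commands as the trace (the temp-file names here are illustrative):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default        # snapshot the defaults
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified
    for key in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$key" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$key"  /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" != "$after" ] && echo "Setting $key is changed as expected."
    done
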
00:29:45.338 10:55:11 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:45.338 10:55:11 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_150628 /tmp/settings_modified_150628 00:29:45.338 10:55:11 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 150661 00:29:45.338 10:55:11 -- common/autotest_common.sh@926 -- # '[' -z 150661 ']' 00:29:45.338 10:55:11 -- common/autotest_common.sh@930 -- # kill -0 150661 00:29:45.338 10:55:11 -- common/autotest_common.sh@931 -- # uname 00:29:45.338 10:55:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:45.338 10:55:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150661 00:29:45.338 10:55:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:45.338 killing process with pid 150661 00:29:45.338 10:55:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:45.338 10:55:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150661' 00:29:45.338 10:55:11 -- common/autotest_common.sh@945 -- # kill 150661 00:29:45.338 10:55:11 -- common/autotest_common.sh@950 -- # wait 150661 00:29:45.596 RPC TIMEOUT SETTING TEST PASSED. 00:29:45.596 10:55:12 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:29:45.596 00:29:45.596 real 0m2.627s 00:29:45.596 user 0m5.434s 00:29:45.596 sys 0m0.624s 00:29:45.596 10:55:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.596 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:45.596 ************************************ 00:29:45.596 END TEST nvme_rpc_timeouts 00:29:45.596 ************************************ 00:29:45.855 10:55:12 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:29:45.855 10:55:12 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@268 -- # timing_exit lib 00:29:45.855 10:55:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:45.855 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:45.855 10:55:12 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:45.855 10:55:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:45.856 10:55:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:45.856 10:55:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:45.856 10:55:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:45.856 10:55:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:45.856 10:55:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:45.856 10:55:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:45.856 10:55:12 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:29:45.856 10:55:12 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:45.856 10:55:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:45.856 10:55:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:29:45.856 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:45.856 ************************************ 00:29:45.856 START TEST blockdev_raid5f 00:29:45.856 ************************************ 00:29:45.856 10:55:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:29:45.856 * Looking for test storage... 00:29:45.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:45.856 10:55:12 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:45.856 10:55:12 -- bdev/nbd_common.sh@6 -- # set -e 00:29:45.856 10:55:12 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:45.856 10:55:12 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:45.856 10:55:12 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:45.856 10:55:12 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:45.856 10:55:12 -- bdev/blockdev.sh@18 -- # : 00:29:45.856 10:55:12 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:45.856 10:55:12 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:45.856 10:55:12 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:45.856 10:55:12 -- bdev/blockdev.sh@672 -- # uname -s 00:29:45.856 10:55:12 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:45.856 10:55:12 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:45.856 10:55:12 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:29:45.856 10:55:12 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:45.856 10:55:12 -- bdev/blockdev.sh@682 -- # dek= 00:29:45.856 10:55:12 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:45.856 10:55:12 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:45.856 10:55:12 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:45.856 10:55:12 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:29:45.856 10:55:12 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:29:45.856 10:55:12 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:45.856 10:55:12 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=150787 00:29:45.856 10:55:12 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:45.856 10:55:12 -- bdev/blockdev.sh@47 -- # waitforlisten 150787 00:29:45.856 10:55:12 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:45.856 10:55:12 -- common/autotest_common.sh@819 -- # '[' -z 150787 ']' 00:29:45.856 10:55:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.856 10:55:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:45.856 10:55:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.856 10:55:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:45.856 10:55:12 -- common/autotest_common.sh@10 -- # set +x 00:29:45.856 [2024-07-24 10:55:12.502532] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
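The blockdev_raid5f suite starting here first builds its target: the rpc_cmd calls that follow create three 32 MiB malloc base bdevs and a raid5f volume across them (strip size 2 KiB, 65536 x 512-byte blocks per base, per the bdev_get_bdevs dump further down). A rough by-hand equivalent; the bdev_raid_create name and flags are an assumption about the RPC spelling in this SPDK version, not shown literally in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2; do
        $rpc bdev_malloc_create -b Malloc$i 32 512      # 32 MiB each, 512-byte blocks
    done
    $rpc bdev_raid_create -n raid5f -z 2 -r raid5f -b 'Malloc0 Malloc1 Malloc2'
    $rpc bdev_get_bdevs -b raid5f                       # prints the JSON shown below
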
00:29:45.856 [2024-07-24 10:55:12.502784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150787 ] 00:29:46.114 [2024-07-24 10:55:12.646708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.114 [2024-07-24 10:55:12.741962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:46.114 [2024-07-24 10:55:12.742210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.048 10:55:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:47.048 10:55:13 -- common/autotest_common.sh@852 -- # return 0 00:29:47.048 10:55:13 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:47.048 10:55:13 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:29:47.048 10:55:13 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:29:47.048 10:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.048 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 Malloc0 00:29:47.048 Malloc1 00:29:47.048 Malloc2 00:29:47.048 10:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.048 10:55:13 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:47.048 10:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.048 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 10:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.048 10:55:13 -- bdev/blockdev.sh@738 -- # cat 00:29:47.048 10:55:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:47.048 10:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.048 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 10:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.048 10:55:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:47.048 10:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.048 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 10:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.048 10:55:13 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:47.048 10:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.048 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 10:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.048 10:55:13 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:47.048 10:55:13 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:47.048 10:55:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:47.048 10:55:13 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:47.048 10:55:13 -- common/autotest_common.sh@10 -- # set +x 00:29:47.048 10:55:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:47.048 10:55:13 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:47.048 10:55:13 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:47.048 10:55:13 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2af0ef1f-63de-4a49-b33e-aa0d84f6e34a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2af0ef1f-63de-4a49-b33e-aa0d84f6e34a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2af0ef1f-63de-4a49-b33e-aa0d84f6e34a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "da4aa018-3962-4d87-bc0d-722b3a981a30",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ddee1ccb-e227-418e-a439-94c28487b3c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6a6c10d7-bcb9-4d3c-b65a-254e43b8f263",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:47.306 10:55:13 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:47.306 10:55:13 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:29:47.306 10:55:13 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:47.306 10:55:13 -- bdev/blockdev.sh@752 -- # killprocess 150787 00:29:47.306 10:55:13 -- common/autotest_common.sh@926 -- # '[' -z 150787 ']' 00:29:47.306 10:55:13 -- common/autotest_common.sh@930 -- # kill -0 150787 00:29:47.306 10:55:13 -- common/autotest_common.sh@931 -- # uname 00:29:47.306 10:55:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:47.306 10:55:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150787 00:29:47.306 10:55:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:47.306 10:55:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:47.306 killing process with pid 150787 00:29:47.306 10:55:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150787' 00:29:47.306 10:55:13 -- common/autotest_common.sh@945 -- # kill 150787 00:29:47.306 10:55:13 -- common/autotest_common.sh@950 -- # wait 150787 00:29:47.871 10:55:14 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:47.871 10:55:14 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:47.871 10:55:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:47.871 10:55:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:47.871 10:55:14 -- common/autotest_common.sh@10 -- # set +x 00:29:47.871 ************************************ 00:29:47.871 START TEST bdev_hello_world 00:29:47.871 ************************************ 00:29:47.871 10:55:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:47.871 [2024-07-24 10:55:14.331406] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:29:47.871 [2024-07-24 10:55:14.331721] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150836 ] 00:29:47.871 [2024-07-24 10:55:14.473970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.871 [2024-07-24 10:55:14.553397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.130 [2024-07-24 10:55:14.784629] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:48.130 [2024-07-24 10:55:14.784725] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:29:48.130 [2024-07-24 10:55:14.784772] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:48.130 [2024-07-24 10:55:14.785214] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:48.130 [2024-07-24 10:55:14.785413] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:48.130 [2024-07-24 10:55:14.785481] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:48.130 [2024-07-24 10:55:14.785570] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:48.130 00:29:48.130 [2024-07-24 10:55:14.785634] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:48.695 00:29:48.695 real 0m0.809s 00:29:48.695 user 0m0.458s 00:29:48.695 sys 0m0.238s 00:29:48.695 10:55:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.695 10:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:48.695 ************************************ 00:29:48.695 END TEST bdev_hello_world 00:29:48.695 ************************************ 00:29:48.695 10:55:15 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:48.695 10:55:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:48.695 10:55:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:48.695 10:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:48.695 ************************************ 00:29:48.695 START TEST bdev_bounds 00:29:48.695 ************************************ 00:29:48.695 10:55:15 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:48.695 10:55:15 -- bdev/blockdev.sh@288 -- # bdevio_pid=150874 00:29:48.695 10:55:15 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:48.695 Process bdevio pid: 150874 00:29:48.695 10:55:15 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:48.695 10:55:15 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 150874' 00:29:48.695 10:55:15 -- bdev/blockdev.sh@291 -- # waitforlisten 150874 00:29:48.695 10:55:15 -- common/autotest_common.sh@819 -- # '[' -z 150874 ']' 00:29:48.695 10:55:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.695 10:55:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:48.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.695 10:55:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
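Two example binaries are exercised back to back here: hello_bdev, whose open/write/read NOTICE lines appear just above, and the bdevio exerciser that bdev_bounds is now starting, driven afterwards over RPC. A sketch of the two invocations as used in this run (backgrounding bdevio before tests.py is implied by the waitforlisten step rather than shown literally in the trace):

    cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json $cfg -b raid5f
    #   expected tail of the output: "Read string from bdev : Hello World!"
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json $cfg '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests   # runs the 23-test CUnit suite below
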
00:29:48.695 10:55:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:48.695 10:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:48.695 [2024-07-24 10:55:15.197594] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:48.695 [2024-07-24 10:55:15.197829] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150874 ] 00:29:48.695 [2024-07-24 10:55:15.356154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.953 [2024-07-24 10:55:15.444672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.953 [2024-07-24 10:55:15.444816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.953 [2024-07-24 10:55:15.444827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.518 10:55:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:49.518 10:55:16 -- common/autotest_common.sh@852 -- # return 0 00:29:49.518 10:55:16 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:49.775 I/O targets: 00:29:49.775 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:29:49.775 00:29:49.775 00:29:49.775 CUnit - A unit testing framework for C - Version 2.1-3 00:29:49.775 http://cunit.sourceforge.net/ 00:29:49.775 00:29:49.775 00:29:49.775 Suite: bdevio tests on: raid5f 00:29:49.775 Test: blockdev write read block ...passed 00:29:49.775 Test: blockdev write zeroes read block ...passed 00:29:49.775 Test: blockdev write zeroes read no split ...passed 00:29:49.775 Test: blockdev write zeroes read split ...passed 00:29:49.775 Test: blockdev write zeroes read split partial ...passed 00:29:49.775 Test: blockdev reset ...passed 00:29:49.775 Test: blockdev write read 8 blocks ...passed 00:29:49.775 Test: blockdev write read size > 128k ...passed 00:29:49.775 Test: blockdev write read invalid size ...passed 00:29:49.775 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:49.775 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:49.775 Test: blockdev write read max offset ...passed 00:29:49.775 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:49.775 Test: blockdev writev readv 8 blocks ...passed 00:29:49.775 Test: blockdev writev readv 30 x 1block ...passed 00:29:49.775 Test: blockdev writev readv block ...passed 00:29:49.775 Test: blockdev writev readv size > 128k ...passed 00:29:49.775 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:49.775 Test: blockdev comparev and writev ...passed 00:29:49.775 Test: blockdev nvme passthru rw ...passed 00:29:49.775 Test: blockdev nvme passthru vendor specific ...passed 00:29:49.775 Test: blockdev nvme admin passthru ...passed 00:29:49.775 Test: blockdev copy ...passed 00:29:49.775 00:29:49.775 Run Summary: Type Total Ran Passed Failed Inactive 00:29:49.775 suites 1 1 n/a 0 0 00:29:49.775 tests 23 23 23 0 0 00:29:49.775 asserts 130 130 130 0 n/a 00:29:49.775 00:29:49.775 Elapsed time = 0.362 seconds 00:29:49.775 0 00:29:49.775 10:55:16 -- bdev/blockdev.sh@293 -- # killprocess 150874 00:29:49.775 10:55:16 -- common/autotest_common.sh@926 -- # '[' -z 150874 ']' 00:29:49.775 10:55:16 -- common/autotest_common.sh@930 -- # kill -0 150874 00:29:49.775 10:55:16 -- common/autotest_common.sh@931 -- # uname 00:29:49.775 10:55:16 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:49.775 10:55:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150874 00:29:50.034 10:55:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:50.034 10:55:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:50.034 killing process with pid 150874 00:29:50.034 10:55:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150874' 00:29:50.034 10:55:16 -- common/autotest_common.sh@945 -- # kill 150874 00:29:50.034 10:55:16 -- common/autotest_common.sh@950 -- # wait 150874 00:29:50.292 10:55:16 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:50.292 00:29:50.292 real 0m1.656s 00:29:50.292 user 0m4.087s 00:29:50.292 sys 0m0.325s 00:29:50.292 10:55:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.292 ************************************ 00:29:50.292 10:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:50.292 END TEST bdev_bounds 00:29:50.292 ************************************ 00:29:50.292 10:55:16 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:50.292 10:55:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:50.292 10:55:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:50.292 10:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:50.292 ************************************ 00:29:50.292 START TEST bdev_nbd 00:29:50.292 ************************************ 00:29:50.292 10:55:16 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:50.292 10:55:16 -- bdev/blockdev.sh@298 -- # uname -s 00:29:50.292 10:55:16 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:50.292 10:55:16 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:50.292 10:55:16 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:50.293 10:55:16 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:29:50.293 10:55:16 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:50.293 10:55:16 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:50.293 10:55:16 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:50.293 10:55:16 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:50.293 10:55:16 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:50.293 10:55:16 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:50.293 10:55:16 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:29:50.293 10:55:16 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:50.293 10:55:16 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:29:50.293 10:55:16 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:50.293 10:55:16 -- bdev/blockdev.sh@316 -- # nbd_pid=150931 00:29:50.293 10:55:16 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:50.293 10:55:16 -- bdev/blockdev.sh@318 -- # waitforlisten 150931 /var/tmp/spdk-nbd.sock 00:29:50.293 10:55:16 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:50.293 10:55:16 -- common/autotest_common.sh@819 -- # '[' -z 150931 ']' 00:29:50.293 10:55:16 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:50.293 10:55:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:50.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:50.293 10:55:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:50.293 10:55:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:50.293 10:55:16 -- common/autotest_common.sh@10 -- # set +x 00:29:50.293 [2024-07-24 10:55:16.909649] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:50.293 [2024-07-24 10:55:16.910438] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.552 [2024-07-24 10:55:17.053918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.552 [2024-07-24 10:55:17.142283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.485 10:55:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:51.485 10:55:17 -- common/autotest_common.sh@852 -- # return 0 00:29:51.485 10:55:17 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@24 -- # local i 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:51.485 10:55:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:29:51.485 10:55:18 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:51.485 10:55:18 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:51.485 10:55:18 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:51.485 10:55:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:51.485 10:55:18 -- common/autotest_common.sh@857 -- # local i 00:29:51.485 10:55:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:51.485 10:55:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:51.485 10:55:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:51.485 10:55:18 -- common/autotest_common.sh@861 -- # break 00:29:51.485 10:55:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:51.485 10:55:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:51.485 10:55:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:51.485 1+0 records in 00:29:51.485 1+0 records out 00:29:51.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618686 s, 6.6 MB/s 00:29:51.485 10:55:18 -- common/autotest_common.sh@874 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.485 10:55:18 -- common/autotest_common.sh@874 -- # size=4096 00:29:51.485 10:55:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:51.485 10:55:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:51.485 10:55:18 -- common/autotest_common.sh@877 -- # return 0 00:29:51.485 10:55:18 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:51.485 10:55:18 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:51.485 10:55:18 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:51.743 { 00:29:51.743 "nbd_device": "/dev/nbd0", 00:29:51.743 "bdev_name": "raid5f" 00:29:51.743 } 00:29:51.743 ]' 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:51.743 { 00:29:51.743 "nbd_device": "/dev/nbd0", 00:29:51.743 "bdev_name": "raid5f" 00:29:51.743 } 00:29:51.743 ]' 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@51 -- # local i 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:51.743 10:55:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@41 -- # break 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@45 -- # return 0 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.000 10:55:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:52.259 10:55:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:52.259 10:55:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:52.259 10:55:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:52.517 10:55:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:52.517 10:55:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@65 -- # true 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@65 -- # count=0 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@122 -- # count=0 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@127 -- # return 0 00:29:52.518 10:55:18 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@12 -- # local i 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:52.518 10:55:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:29:52.775 /dev/nbd0 00:29:52.775 10:55:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:52.775 10:55:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:52.775 10:55:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:52.775 10:55:19 -- common/autotest_common.sh@857 -- # local i 00:29:52.775 10:55:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:52.775 10:55:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:52.775 10:55:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:52.775 10:55:19 -- common/autotest_common.sh@861 -- # break 00:29:52.775 10:55:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:52.775 10:55:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:52.775 10:55:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:52.775 1+0 records in 00:29:52.775 1+0 records out 00:29:52.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315596 s, 13.0 MB/s 00:29:52.776 10:55:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.776 10:55:19 -- common/autotest_common.sh@874 -- # size=4096 00:29:52.776 10:55:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:52.776 10:55:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:52.776 10:55:19 -- common/autotest_common.sh@877 -- # return 0 00:29:52.776 10:55:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:52.776 10:55:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:52.776 10:55:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:52.776 10:55:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:52.776 10:55:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:53.034 { 00:29:53.034 "nbd_device": "/dev/nbd0", 00:29:53.034 "bdev_name": "raid5f" 00:29:53.034 } 00:29:53.034 ]' 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:53.034 { 00:29:53.034 "nbd_device": "/dev/nbd0", 00:29:53.034 "bdev_name": "raid5f" 00:29:53.034 
} 00:29:53.034 ]' 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@65 -- # count=1 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@95 -- # count=1 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:53.034 256+0 records in 00:29:53.034 256+0 records out 00:29:53.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470397 s, 223 MB/s 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:53.034 256+0 records in 00:29:53.034 256+0 records out 00:29:53.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0368246 s, 28.5 MB/s 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@51 -- # local i 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:53.034 10:55:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:29:53.292 10:55:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@41 -- # break 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@45 -- # return 0 00:29:53.292 10:55:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:53.293 10:55:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:53.293 10:55:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:53.550 10:55:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:53.550 10:55:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:53.550 10:55:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:53.808 10:55:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:53.808 10:55:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:53.808 10:55:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:53.808 10:55:20 -- bdev/nbd_common.sh@65 -- # true 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@65 -- # count=0 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@104 -- # count=0 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@109 -- # return 0 00:29:53.809 10:55:20 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:53.809 10:55:20 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:53.809 malloc_lvol_verify 00:29:54.075 10:55:20 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:54.340 c40e8d4d-4f1f-49bd-b64a-68bd3fa86300 00:29:54.340 10:55:20 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:54.598 d4239f6b-2a9d-44c0-bf60-7bb84a1a1869 00:29:54.598 10:55:21 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:54.856 /dev/nbd0 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:54.856 mke2fs 1.46.5 (30-Dec-2021) 00:29:54.856 00:29:54.856 Filesystem too small for a journal 00:29:54.856 Discarding device blocks: 0/1024 done 00:29:54.856 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:54.856 00:29:54.856 Allocating group tables: 0/1 done 00:29:54.856 Writing inode tables: 0/1 done 00:29:54.856 Writing superblocks and filesystem accounting information: 0/1 done 00:29:54.856 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@51 -- # local i 00:29:54.856 10:55:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:29:54.856 10:55:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@41 -- # break 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@45 -- # return 0 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:55.114 10:55:21 -- bdev/nbd_common.sh@147 -- # return 0 00:29:55.114 10:55:21 -- bdev/blockdev.sh@324 -- # killprocess 150931 00:29:55.114 10:55:21 -- common/autotest_common.sh@926 -- # '[' -z 150931 ']' 00:29:55.114 10:55:21 -- common/autotest_common.sh@930 -- # kill -0 150931 00:29:55.114 10:55:21 -- common/autotest_common.sh@931 -- # uname 00:29:55.114 10:55:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:55.114 10:55:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 150931 00:29:55.114 10:55:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:55.114 10:55:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:55.114 10:55:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 150931' 00:29:55.114 killing process with pid 150931 00:29:55.114 10:55:21 -- common/autotest_common.sh@945 -- # kill 150931 00:29:55.114 10:55:21 -- common/autotest_common.sh@950 -- # wait 150931 00:29:55.372 10:55:21 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:55.372 00:29:55.372 real 0m5.150s 00:29:55.372 user 0m7.992s 00:29:55.372 sys 0m1.117s 00:29:55.372 10:55:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:55.372 10:55:21 -- common/autotest_common.sh@10 -- # set +x 00:29:55.372 ************************************ 00:29:55.372 END TEST bdev_nbd 00:29:55.372 ************************************ 00:29:55.372 10:55:22 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:55.372 10:55:22 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:29:55.372 10:55:22 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:29:55.372 10:55:22 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:29:55.372 10:55:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:55.372 10:55:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:55.372 10:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:55.372 ************************************ 00:29:55.372 START TEST bdev_fio 00:29:55.372 ************************************ 00:29:55.373 10:55:22 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:29:55.373 10:55:22 -- bdev/blockdev.sh@329 -- # local env_context 00:29:55.373 10:55:22 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:29:55.373 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:29:55.373 10:55:22 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:55.373 10:55:22 -- bdev/blockdev.sh@337 -- # echo '' 00:29:55.373 10:55:22 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:29:55.631 10:55:22 -- bdev/blockdev.sh@337 -- # env_context= 00:29:55.632 10:55:22 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:55.632 10:55:22 -- common/autotest_common.sh@1260 -- # local workload=verify 00:29:55.632 10:55:22 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:29:55.632 10:55:22 -- common/autotest_common.sh@1262 -- # local env_context= 00:29:55.632 10:55:22 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:29:55.632 10:55:22 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:55.632 10:55:22 -- common/autotest_common.sh@1280 -- # cat 00:29:55.632 10:55:22 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1293 -- # cat 00:29:55.632 10:55:22 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:29:55.632 10:55:22 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:55.632 10:55:22 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:29:55.632 10:55:22 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:29:55.632 10:55:22 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:29:55.632 10:55:22 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:29:55.632 10:55:22 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:29:55.632 10:55:22 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:55.632 10:55:22 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:55.632 10:55:22 -- common/autotest_common.sh@10 -- # set +x 00:29:55.632 ************************************ 00:29:55.632 START TEST bdev_fio_rw_verify 00:29:55.632 ************************************ 00:29:55.632 10:55:22 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:55.632 10:55:22 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:55.632 10:55:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:55.632 10:55:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:29:55.632 10:55:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:55.632 10:55:22 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:55.632 10:55:22 -- common/autotest_common.sh@1320 -- # shift 00:29:55.632 10:55:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:55.632 10:55:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.632 10:55:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:55.632 10:55:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:55.632 10:55:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:29:55.632 10:55:22 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:29:55.632 10:55:22 -- common/autotest_common.sh@1326 -- # break 00:29:55.632 10:55:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:55.632 10:55:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:55.632 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:29:55.632 fio-3.35 00:29:55.632 Starting 1 thread 00:30:07.936 00:30:07.936 job_raid5f: (groupid=0, jobs=1): err= 0: pid=151154: Wed Jul 24 10:55:32 2024 00:30:07.936 read: IOPS=9676, BW=37.8MiB/s (39.6MB/s)(378MiB/10001msec) 00:30:07.936 slat (usec): min=18, max=965, avg=24.73, stdev= 7.55 00:30:07.936 clat (usec): min=11, max=1494, avg=162.84, stdev=63.45 00:30:07.936 lat (usec): min=34, max=1520, avg=187.57, stdev=64.93 00:30:07.936 clat percentiles (usec): 00:30:07.936 | 50.000th=[ 161], 99.000th=[ 302], 99.900th=[ 383], 99.990th=[ 807], 00:30:07.936 | 99.999th=[ 1500] 00:30:07.936 write: IOPS=10.2k, BW=39.8MiB/s (41.8MB/s)(394MiB/9889msec); 0 zone resets 00:30:07.936 slat (usec): min=9, max=559, avg=21.46, stdev= 6.87 00:30:07.936 clat (usec): min=61, max=1156, avg=375.15, stdev=64.30 00:30:07.936 lat (usec): min=78, max=1370, avg=396.61, stdev=66.65 00:30:07.936 clat percentiles (usec): 00:30:07.936 | 50.000th=[ 375], 99.000th=[ 529], 99.900th=[ 799], 99.990th=[ 1037], 00:30:07.936 | 99.999th=[ 1106] 00:30:07.936 bw ( KiB/s): min=35248, max=45776, per=98.71%, avg=40256.00, stdev=2914.34, samples=19 00:30:07.936 iops : min= 8812, max=11444, avg=10064.21, stdev=728.33, samples=19 00:30:07.936 lat (usec) : 20=0.01%, 50=0.01%, 100=9.86%, 250=35.73%, 500=53.18% 00:30:07.936 lat (usec) : 750=1.16%, 1000=0.06% 00:30:07.936 lat (msec) : 2=0.01% 00:30:07.936 cpu : usr=99.03%, sys=0.88%, ctx=208, majf=0, minf=9997 00:30:07.936 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:07.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.936 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:07.936 issued rwts: total=96775,100825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:07.936 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:07.936 00:30:07.936 Run status group 0 (all jobs): 00:30:07.936 READ: bw=37.8MiB/s (39.6MB/s), 
37.8MiB/s-37.8MiB/s (39.6MB/s-39.6MB/s), io=378MiB (396MB), run=10001-10001msec 00:30:07.936 WRITE: bw=39.8MiB/s (41.8MB/s), 39.8MiB/s-39.8MiB/s (41.8MB/s-41.8MB/s), io=394MiB (413MB), run=9889-9889msec 00:30:07.936 ----------------------------------------------------- 00:30:07.936 Suppressions used: 00:30:07.936 count bytes template 00:30:07.936 1 7 /usr/src/fio/parse.c 00:30:07.936 1013 97248 /usr/src/fio/iolog.c 00:30:07.936 1 904 libcrypto.so 00:30:07.936 ----------------------------------------------------- 00:30:07.936 00:30:07.936 00:30:07.936 real 0m11.284s 00:30:07.936 user 0m11.926s 00:30:07.936 sys 0m0.664s 00:30:07.936 10:55:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.936 10:55:33 -- common/autotest_common.sh@10 -- # set +x 00:30:07.936 ************************************ 00:30:07.936 END TEST bdev_fio_rw_verify 00:30:07.936 ************************************ 00:30:07.936 10:55:33 -- bdev/blockdev.sh@348 -- # rm -f 00:30:07.936 10:55:33 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.936 10:55:33 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:30:07.936 10:55:33 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.936 10:55:33 -- common/autotest_common.sh@1260 -- # local workload=trim 00:30:07.936 10:55:33 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:30:07.936 10:55:33 -- common/autotest_common.sh@1262 -- # local env_context= 00:30:07.936 10:55:33 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:30:07.936 10:55:33 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:30:07.936 10:55:33 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:30:07.936 10:55:33 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:30:07.936 10:55:33 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.936 10:55:33 -- common/autotest_common.sh@1280 -- # cat 00:30:07.936 10:55:33 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:30:07.936 10:55:33 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:30:07.936 10:55:33 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:30:07.937 10:55:33 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2af0ef1f-63de-4a49-b33e-aa0d84f6e34a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2af0ef1f-63de-4a49-b33e-aa0d84f6e34a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2af0ef1f-63de-4a49-b33e-aa0d84f6e34a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "da4aa018-3962-4d87-bc0d-722b3a981a30",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc1",' ' "uuid": "ddee1ccb-e227-418e-a439-94c28487b3c9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6a6c10d7-bcb9-4d3c-b65a-254e43b8f263",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:30:07.937 10:55:33 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:30:07.937 10:55:33 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:30:07.937 10:55:33 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:30:07.937 /home/vagrant/spdk_repo/spdk 00:30:07.937 10:55:33 -- bdev/blockdev.sh@360 -- # popd 00:30:07.937 10:55:33 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:30:07.937 10:55:33 -- bdev/blockdev.sh@362 -- # return 0 00:30:07.937 00:30:07.937 real 0m11.466s 00:30:07.937 user 0m12.038s 00:30:07.937 sys 0m0.735s 00:30:07.937 10:55:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.937 10:55:33 -- common/autotest_common.sh@10 -- # set +x 00:30:07.937 ************************************ 00:30:07.937 END TEST bdev_fio 00:30:07.937 ************************************ 00:30:07.937 10:55:33 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:07.937 10:55:33 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:07.937 10:55:33 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:07.937 10:55:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:07.937 10:55:33 -- common/autotest_common.sh@10 -- # set +x 00:30:07.937 ************************************ 00:30:07.937 START TEST bdev_verify 00:30:07.937 ************************************ 00:30:07.937 10:55:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:07.937 [2024-07-24 10:55:33.639935] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:07.937 [2024-07-24 10:55:33.640217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151320 ] 00:30:07.937 [2024-07-24 10:55:33.803251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:07.937 [2024-07-24 10:55:33.887998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.937 [2024-07-24 10:55:33.888010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.937 Running I/O for 5 seconds... 
00:30:13.228 00:30:13.228 Latency(us) 00:30:13.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.228 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:13.228 Verification LBA range: start 0x0 length 0x2000 00:30:13.228 raid5f : 5.01 9982.84 39.00 0.00 0.00 20300.04 454.28 27644.28 00:30:13.228 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:13.228 Verification LBA range: start 0x2000 length 0x2000 00:30:13.228 raid5f : 5.01 9846.88 38.46 0.00 0.00 20595.34 216.90 16443.58 00:30:13.228 =================================================================================================================== 00:30:13.228 Total : 19829.72 77.46 0.00 0.00 20446.69 216.90 27644.28 00:30:13.228 00:30:13.228 real 0m5.853s 00:30:13.228 user 0m10.867s 00:30:13.228 sys 0m0.265s 00:30:13.228 10:55:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:13.228 10:55:39 -- common/autotest_common.sh@10 -- # set +x 00:30:13.228 ************************************ 00:30:13.228 END TEST bdev_verify 00:30:13.228 ************************************ 00:30:13.228 10:55:39 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:13.228 10:55:39 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:13.228 10:55:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.228 10:55:39 -- common/autotest_common.sh@10 -- # set +x 00:30:13.228 ************************************ 00:30:13.228 START TEST bdev_verify_big_io 00:30:13.228 ************************************ 00:30:13.228 10:55:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:13.228 [2024-07-24 10:55:39.536405] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:13.228 [2024-07-24 10:55:39.536666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151417 ] 00:30:13.228 [2024-07-24 10:55:39.692460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:13.228 [2024-07-24 10:55:39.792990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.228 [2024-07-24 10:55:39.793007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.485 Running I/O for 5 seconds... 
00:30:18.808 00:30:18.808 Latency(us) 00:30:18.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:18.808 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:18.808 Verification LBA range: start 0x0 length 0x200 00:30:18.808 raid5f : 5.15 677.19 42.32 0.00 0.00 4925840.71 184.32 150613.64 00:30:18.808 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:18.808 Verification LBA range: start 0x200 length 0x200 00:30:18.808 raid5f : 5.14 685.99 42.87 0.00 0.00 4869286.30 258.79 146800.64 00:30:18.808 =================================================================================================================== 00:30:18.808 Total : 1363.18 85.20 0.00 0.00 4897406.29 184.32 150613.64 00:30:18.808 00:30:18.808 real 0m6.000s 00:30:18.808 user 0m11.163s 00:30:18.808 sys 0m0.284s 00:30:18.808 10:55:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:18.808 10:55:45 -- common/autotest_common.sh@10 -- # set +x 00:30:18.808 ************************************ 00:30:18.808 END TEST bdev_verify_big_io 00:30:18.808 ************************************ 00:30:19.067 10:55:45 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:19.067 10:55:45 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:19.067 10:55:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:19.067 10:55:45 -- common/autotest_common.sh@10 -- # set +x 00:30:19.067 ************************************ 00:30:19.067 START TEST bdev_write_zeroes 00:30:19.067 ************************************ 00:30:19.067 10:55:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:19.067 [2024-07-24 10:55:45.599770] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:19.067 [2024-07-24 10:55:45.600048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151506 ] 00:30:19.067 [2024-07-24 10:55:45.749004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.325 [2024-07-24 10:55:45.834832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.584 Running I/O for 1 seconds... 
00:30:20.516 00:30:20.516 Latency(us) 00:30:20.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.516 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:20.516 raid5f : 1.01 21121.67 82.51 0.00 0.00 6036.89 1720.32 7179.17 00:30:20.516 =================================================================================================================== 00:30:20.516 Total : 21121.67 82.51 0.00 0.00 6036.89 1720.32 7179.17 00:30:20.773 00:30:20.773 real 0m1.841s 00:30:20.773 user 0m1.490s 00:30:20.773 sys 0m0.226s 00:30:20.773 10:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:20.773 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:30:20.773 ************************************ 00:30:20.773 END TEST bdev_write_zeroes 00:30:20.773 ************************************ 00:30:20.773 10:55:47 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:20.773 10:55:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:20.773 10:55:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:20.773 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:30:20.773 ************************************ 00:30:20.773 START TEST bdev_json_nonenclosed 00:30:20.773 ************************************ 00:30:20.773 10:55:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:21.031 [2024-07-24 10:55:47.490458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:21.031 [2024-07-24 10:55:47.490810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151557 ] 00:30:21.031 [2024-07-24 10:55:47.641352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.584 [2024-07-24 10:55:47.737658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.584 [2024-07-24 10:55:47.737939] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:30:21.584 [2024-07-24 10:55:47.737980] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:21.584 00:30:21.584 real 0m0.430s 00:30:21.584 user 0m0.197s 00:30:21.584 sys 0m0.133s 00:30:21.584 10:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.584 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:30:21.584 ************************************ 00:30:21.584 END TEST bdev_json_nonenclosed 00:30:21.584 ************************************ 00:30:21.584 10:55:47 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:21.584 10:55:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:21.584 10:55:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:21.584 10:55:47 -- common/autotest_common.sh@10 -- # set +x 00:30:21.584 ************************************ 00:30:21.584 START TEST bdev_json_nonarray 00:30:21.584 ************************************ 00:30:21.584 10:55:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:21.584 [2024-07-24 10:55:47.974183] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:21.584 [2024-07-24 10:55:47.974483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151580 ] 00:30:21.584 [2024-07-24 10:55:48.127224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.584 [2024-07-24 10:55:48.226818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.584 [2024-07-24 10:55:48.227114] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:21.584 [2024-07-24 10:55:48.227169] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:21.843 00:30:21.843 real 0m0.437s 00:30:21.843 user 0m0.231s 00:30:21.843 sys 0m0.106s 00:30:21.843 10:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.843 10:55:48 -- common/autotest_common.sh@10 -- # set +x 00:30:21.843 ************************************ 00:30:21.843 END TEST bdev_json_nonarray 00:30:21.843 ************************************ 00:30:21.843 10:55:48 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:30:21.843 10:55:48 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:30:21.843 10:55:48 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:30:21.843 10:55:48 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:21.843 10:55:48 -- bdev/blockdev.sh@809 -- # cleanup 00:30:21.843 10:55:48 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:21.843 10:55:48 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:21.843 10:55:48 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:30:21.843 10:55:48 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:30:21.843 10:55:48 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:30:21.843 10:55:48 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:30:21.843 00:30:21.843 real 0m36.049s 00:30:21.843 user 0m50.819s 00:30:21.843 sys 0m4.145s 00:30:21.843 10:55:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.843 ************************************ 00:30:21.843 10:55:48 -- common/autotest_common.sh@10 -- # set +x 00:30:21.843 END TEST blockdev_raid5f 00:30:21.843 ************************************ 00:30:21.843 10:55:48 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:30:21.843 10:55:48 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:30:21.843 10:55:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:21.843 10:55:48 -- common/autotest_common.sh@10 -- # set +x 00:30:21.843 10:55:48 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:30:21.844 10:55:48 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:30:21.844 10:55:48 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:30:21.844 10:55:48 -- common/autotest_common.sh@10 -- # set +x 00:30:23.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:23.752 Waiting for block devices as requested 00:30:23.752 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:24.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:24.012 Cleaning 00:30:24.012 Removing: /var/run/dpdk/spdk0/config 00:30:24.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:24.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:24.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:24.012 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:24.012 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:24.012 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:24.012 Removing: /dev/shm/spdk_tgt_trace.pid115176 00:30:24.012 Removing: /var/run/dpdk/spdk0 00:30:24.012 Removing: /var/run/dpdk/spdk_pid114998 00:30:24.012 Removing: /var/run/dpdk/spdk_pid115176 00:30:24.012 Removing: /var/run/dpdk/spdk_pid115453 00:30:24.012 Removing: /var/run/dpdk/spdk_pid115702 00:30:24.012 Removing: /var/run/dpdk/spdk_pid115879 00:30:24.012 Removing: /var/run/dpdk/spdk_pid115959 00:30:24.012 Removing: /var/run/dpdk/spdk_pid116046 
00:30:24.012 Removing: /var/run/dpdk/spdk_pid116146 00:30:24.012 Removing: /var/run/dpdk/spdk_pid116231 00:30:24.012 Removing: /var/run/dpdk/spdk_pid116277 00:30:24.012 Removing: /var/run/dpdk/spdk_pid116319 00:30:24.012 Removing: /var/run/dpdk/spdk_pid116391 00:30:24.012 Removing: /var/run/dpdk/spdk_pid116499 00:30:24.012 Removing: /var/run/dpdk/spdk_pid117017 00:30:24.012 Removing: /var/run/dpdk/spdk_pid117073 00:30:24.012 Removing: /var/run/dpdk/spdk_pid117133 00:30:24.012 Removing: /var/run/dpdk/spdk_pid117154 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117228 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117251 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117325 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117346 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117398 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117421 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117466 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117489 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117628 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117666 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117709 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117787 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117848 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117880 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117952 00:30:24.271 Removing: /var/run/dpdk/spdk_pid117984 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118026 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118052 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118097 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118120 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118165 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118195 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118236 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118271 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118311 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118339 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118379 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118406 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118449 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118472 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118517 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118540 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118585 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118606 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118653 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118683 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118721 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118751 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118796 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118819 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118865 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118890 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118931 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118966 00:30:24.271 Removing: /var/run/dpdk/spdk_pid118999 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119034 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119067 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119106 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119143 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119176 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119225 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119247 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119292 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119315 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119364 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119441 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119532 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119692 00:30:24.271 
Removing: /var/run/dpdk/spdk_pid119751 00:30:24.271 Removing: /var/run/dpdk/spdk_pid119789 00:30:24.271 Removing: /var/run/dpdk/spdk_pid120983 00:30:24.271 Removing: /var/run/dpdk/spdk_pid121190 00:30:24.271 Removing: /var/run/dpdk/spdk_pid121378 00:30:24.271 Removing: /var/run/dpdk/spdk_pid121479 00:30:24.271 Removing: /var/run/dpdk/spdk_pid121599 00:30:24.271 Removing: /var/run/dpdk/spdk_pid121649 00:30:24.271 Removing: /var/run/dpdk/spdk_pid121671 00:30:24.271 Removing: /var/run/dpdk/spdk_pid121709 00:30:24.271 Removing: /var/run/dpdk/spdk_pid122174 00:30:24.271 Removing: /var/run/dpdk/spdk_pid122254 00:30:24.271 Removing: /var/run/dpdk/spdk_pid122363 00:30:24.271 Removing: /var/run/dpdk/spdk_pid122410 00:30:24.271 Removing: /var/run/dpdk/spdk_pid123590 00:30:24.271 Removing: /var/run/dpdk/spdk_pid124470 00:30:24.271 Removing: /var/run/dpdk/spdk_pid125361 00:30:24.271 Removing: /var/run/dpdk/spdk_pid126489 00:30:24.271 Removing: /var/run/dpdk/spdk_pid127583 00:30:24.271 Removing: /var/run/dpdk/spdk_pid128664 00:30:24.271 Removing: /var/run/dpdk/spdk_pid130182 00:30:24.271 Removing: /var/run/dpdk/spdk_pid131410 00:30:24.271 Removing: /var/run/dpdk/spdk_pid132631 00:30:24.271 Removing: /var/run/dpdk/spdk_pid133316 00:30:24.271 Removing: /var/run/dpdk/spdk_pid133864 00:30:24.271 Removing: /var/run/dpdk/spdk_pid134506 00:30:24.271 Removing: /var/run/dpdk/spdk_pid134989 00:30:24.271 Removing: /var/run/dpdk/spdk_pid135569 00:30:24.271 Removing: /var/run/dpdk/spdk_pid136118 00:30:24.271 Removing: /var/run/dpdk/spdk_pid136792 00:30:24.271 Removing: /var/run/dpdk/spdk_pid137336 00:30:24.271 Removing: /var/run/dpdk/spdk_pid138740 00:30:24.271 Removing: /var/run/dpdk/spdk_pid139358 00:30:24.271 Removing: /var/run/dpdk/spdk_pid139912 00:30:24.271 Removing: /var/run/dpdk/spdk_pid141473 00:30:24.271 Removing: /var/run/dpdk/spdk_pid142167 00:30:24.530 Removing: /var/run/dpdk/spdk_pid142790 00:30:24.530 Removing: /var/run/dpdk/spdk_pid143571 00:30:24.530 Removing: /var/run/dpdk/spdk_pid143612 00:30:24.530 Removing: /var/run/dpdk/spdk_pid143651 00:30:24.530 Removing: /var/run/dpdk/spdk_pid143690 00:30:24.530 Removing: /var/run/dpdk/spdk_pid143820 00:30:24.530 Removing: /var/run/dpdk/spdk_pid143960 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144172 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144452 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144475 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144514 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144532 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144553 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144573 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144590 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144604 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144631 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144643 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144660 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144680 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144700 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144716 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144736 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144756 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144765 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144794 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144807 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144827 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144863 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144887 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144910 00:30:24.530 Removing: /var/run/dpdk/spdk_pid144979 00:30:24.530 Removing: 
/var/run/dpdk/spdk_pid145021 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145036 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145068 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145083 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145097 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145145 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145163 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145195 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145206 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145221 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145238 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145243 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145260 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145272 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145282 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145315 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145356 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145361 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145403 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145412 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145427 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145481 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145493 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145524 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145545 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145550 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145567 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145583 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145589 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145605 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145618 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145693 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145752 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145862 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145885 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145933 00:30:24.530 Removing: /var/run/dpdk/spdk_pid145978 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146006 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146028 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146050 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146086 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146104 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146178 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146231 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146269 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146528 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146643 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146679 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146764 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146831 00:30:24.530 Removing: /var/run/dpdk/spdk_pid146869 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147108 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147291 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147386 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147424 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147455 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147529 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147948 00:30:24.530 Removing: /var/run/dpdk/spdk_pid147980 00:30:24.530 Removing: /var/run/dpdk/spdk_pid148283 00:30:24.794 Removing: /var/run/dpdk/spdk_pid148416 00:30:24.794 Removing: /var/run/dpdk/spdk_pid148505 00:30:24.794 Removing: /var/run/dpdk/spdk_pid148556 00:30:24.794 Removing: /var/run/dpdk/spdk_pid148578 00:30:24.794 Removing: /var/run/dpdk/spdk_pid148617 00:30:24.794 Removing: /var/run/dpdk/spdk_pid149935 00:30:24.794 Removing: /var/run/dpdk/spdk_pid150049 00:30:24.794 Removing: /var/run/dpdk/spdk_pid150054 00:30:24.794 Removing: 
/var/run/dpdk/spdk_pid150080 00:30:24.794 Removing: /var/run/dpdk/spdk_pid150566 00:30:24.794 Removing: /var/run/dpdk/spdk_pid150661 00:30:24.794 Removing: /var/run/dpdk/spdk_pid150787 00:30:24.794 Removing: /var/run/dpdk/spdk_pid150836 00:30:24.794 Removing: /var/run/dpdk/spdk_pid150874 00:30:24.794 Removing: /var/run/dpdk/spdk_pid151144 00:30:24.794 Removing: /var/run/dpdk/spdk_pid151320 00:30:24.794 Removing: /var/run/dpdk/spdk_pid151417 00:30:24.794 Removing: /var/run/dpdk/spdk_pid151506 00:30:24.794 Removing: /var/run/dpdk/spdk_pid151557 00:30:24.794 Removing: /var/run/dpdk/spdk_pid151580 00:30:24.794 Clean 00:30:24.794 killing process with pid 104006 00:30:24.794 killing process with pid 104010 00:30:24.794 10:55:51 -- common/autotest_common.sh@1436 -- # return 0 00:30:24.794 10:55:51 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:30:24.794 10:55:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:24.794 10:55:51 -- common/autotest_common.sh@10 -- # set +x 00:30:24.794 10:55:51 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:30:24.794 10:55:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:24.794 10:55:51 -- common/autotest_common.sh@10 -- # set +x 00:30:25.065 10:55:51 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:25.065 10:55:51 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:25.065 10:55:51 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:25.065 10:55:51 -- spdk/autotest.sh@394 -- # hash lcov 00:30:25.065 10:55:51 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:25.065 10:55:51 -- spdk/autotest.sh@396 -- # hostname 00:30:25.065 10:55:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:25.065 geninfo: WARNING: invalid characters removed from testname! 
00:31:11.734 10:56:36 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:17.000 10:56:42 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:19.532 10:56:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:22.818 10:56:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:26.101 10:56:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:29.395 10:56:56 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:32.677 10:56:59 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:32.936 10:56:59 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:32.936 10:56:59 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:32.936 10:56:59 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:32.936 10:56:59 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:32.936 10:56:59 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:32.936 10:56:59 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:32.936 10:56:59 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:32.936 10:56:59 -- paths/export.sh@5 -- $ export PATH 00:31:32.936 10:56:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:32.936 10:56:59 -- common/autobuild_common.sh@437 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:32.936 10:56:59 -- common/autobuild_common.sh@438 -- $ date +%s 00:31:32.936 10:56:59 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721818619.XXXXXX 00:31:32.936 10:56:59 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721818619.O8rfXF 00:31:32.936 10:56:59 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:31:32.936 10:56:59 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:31:32.936 10:56:59 -- common/autobuild_common.sh@445 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:31:32.936 10:56:59 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:31:32.936 10:56:59 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:32.936 10:56:59 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:32.936 10:56:59 -- common/autobuild_common.sh@454 -- $ get_config_params 00:31:32.936 10:56:59 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:31:32.936 10:56:59 -- common/autotest_common.sh@10 -- $ set +x 00:31:32.936 10:56:59 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:31:32.936 10:56:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:32.936 10:56:59 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:32.936 10:56:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:32.936 10:56:59 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:32.936 10:56:59 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:32.936 10:56:59 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:31:32.936 10:56:59 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:31:32.936 10:56:59 -- common/autotest_common.sh@10 -- $ set +x 00:31:32.936 10:56:59 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:31:32.936 10:56:59 -- spdk/autopackage.sh@36 -- $ [[ -n v22.11.4 ]] 00:31:32.936 10:56:59 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:31:32.936 10:56:59 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:31:32.936 10:56:59 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:32.936 10:56:59 -- 
tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:31:32.936 10:56:59 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:31:32.936 10:56:59 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:31:32.936 10:56:59 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:31:32.936 10:56:59 -- spdk/autopackage.sh@40 -- $ get_config_params 00:31:32.936 10:56:59 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:31:32.936 10:56:59 -- common/autotest_common.sh@10 -- $ set +x 00:31:32.936 10:56:59 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:31:32.936 10:56:59 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto 00:31:32.936 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:31:32.936 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:31:32.936 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:31:32.936 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:31:33.195 Using 'verbs' RDMA provider 00:31:45.953 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:31:55.924 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:31:55.924 Creating mk/config.mk...done. 00:31:55.924 Creating mk/cc.flags.mk...done. 00:31:55.924 Type 'make' to build. 00:31:55.924 10:57:22 -- spdk/autopackage.sh@43 -- $ make -j10 00:31:55.924 make[1]: Nothing to be done for 'all'. 
00:31:55.924 CC lib/log/log.o 00:31:55.924 CC lib/ut_mock/mock.o 00:31:55.924 CC lib/log/log_flags.o 00:31:55.924 CC lib/log/log_deprecated.o 00:31:55.924 CC lib/ut/ut.o 00:31:55.924 LIB libspdk_ut_mock.a 00:31:55.924 LIB libspdk_ut.a 00:31:56.184 LIB libspdk_log.a 00:31:56.184 CXX lib/trace_parser/trace.o 00:31:56.184 CC lib/dma/dma.o 00:31:56.184 CC lib/ioat/ioat.o 00:31:56.184 CC lib/util/base64.o 00:31:56.184 CC lib/util/bit_array.o 00:31:56.184 CC lib/util/cpuset.o 00:31:56.184 CC lib/util/crc16.o 00:31:56.184 CC lib/util/crc32.o 00:31:56.184 CC lib/util/crc32c.o 00:31:56.184 CC lib/vfio_user/host/vfio_user_pci.o 00:31:56.442 CC lib/vfio_user/host/vfio_user.o 00:31:56.442 CC lib/util/crc32_ieee.o 00:31:56.442 CC lib/util/crc64.o 00:31:56.442 LIB libspdk_dma.a 00:31:56.442 LIB libspdk_ioat.a 00:31:56.442 CC lib/util/dif.o 00:31:56.442 CC lib/util/fd.o 00:31:56.442 CC lib/util/file.o 00:31:56.442 CC lib/util/hexlify.o 00:31:56.442 CC lib/util/iov.o 00:31:56.442 CC lib/util/math.o 00:31:56.442 CC lib/util/pipe.o 00:31:56.442 CC lib/util/strerror_tls.o 00:31:56.442 LIB libspdk_vfio_user.a 00:31:56.442 CC lib/util/string.o 00:31:56.442 CC lib/util/uuid.o 00:31:56.442 CC lib/util/fd_group.o 00:31:56.700 CC lib/util/xor.o 00:31:56.700 CC lib/util/zipf.o 00:31:56.700 LIB libspdk_util.a 00:31:56.957 CC lib/vmd/vmd.o 00:31:56.957 CC lib/vmd/led.o 00:31:56.957 CC lib/idxd/idxd.o 00:31:56.957 CC lib/idxd/idxd_user.o 00:31:56.957 CC lib/rdma/common.o 00:31:56.957 CC lib/conf/conf.o 00:31:56.957 LIB libspdk_trace_parser.a 00:31:56.957 CC lib/rdma/rdma_verbs.o 00:31:56.957 CC lib/json/json_parse.o 00:31:56.957 CC lib/env_dpdk/env.o 00:31:56.957 CC lib/env_dpdk/memory.o 00:31:56.957 CC lib/env_dpdk/pci.o 00:31:56.957 CC lib/json/json_util.o 00:31:56.957 CC lib/env_dpdk/init.o 00:31:57.214 LIB libspdk_conf.a 00:31:57.214 LIB libspdk_rdma.a 00:31:57.214 CC lib/env_dpdk/threads.o 00:31:57.214 CC lib/env_dpdk/pci_ioat.o 00:31:57.214 CC lib/json/json_write.o 00:31:57.214 LIB libspdk_idxd.a 00:31:57.214 LIB libspdk_vmd.a 00:31:57.214 CC lib/env_dpdk/pci_virtio.o 00:31:57.214 CC lib/env_dpdk/pci_vmd.o 00:31:57.214 CC lib/env_dpdk/pci_idxd.o 00:31:57.214 CC lib/env_dpdk/pci_event.o 00:31:57.214 CC lib/env_dpdk/sigbus_handler.o 00:31:57.214 CC lib/env_dpdk/pci_dpdk.o 00:31:57.214 CC lib/env_dpdk/pci_dpdk_2207.o 00:31:57.214 CC lib/env_dpdk/pci_dpdk_2211.o 00:31:57.214 LIB libspdk_json.a 00:31:57.470 CC lib/jsonrpc/jsonrpc_server.o 00:31:57.470 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:31:57.470 CC lib/jsonrpc/jsonrpc_client.o 00:31:57.470 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:31:57.727 LIB libspdk_jsonrpc.a 00:31:57.727 CC lib/rpc/rpc.o 00:31:57.985 LIB libspdk_env_dpdk.a 00:31:57.985 LIB libspdk_rpc.a 00:31:57.985 CC lib/notify/notify.o 00:31:57.985 CC lib/notify/notify_rpc.o 00:31:57.985 CC lib/trace/trace.o 00:31:57.985 CC lib/trace/trace_flags.o 00:31:57.985 CC lib/trace/trace_rpc.o 00:31:57.985 CC lib/sock/sock.o 00:31:57.985 CC lib/sock/sock_rpc.o 00:31:58.242 LIB libspdk_notify.a 00:31:58.242 LIB libspdk_trace.a 00:31:58.242 LIB libspdk_sock.a 00:31:58.500 CC lib/thread/thread.o 00:31:58.500 CC lib/thread/iobuf.o 00:31:58.500 CC lib/nvme/nvme_ctrlr_cmd.o 00:31:58.500 CC lib/nvme/nvme_ctrlr.o 00:31:58.500 CC lib/nvme/nvme_ns_cmd.o 00:31:58.500 CC lib/nvme/nvme_fabric.o 00:31:58.500 CC lib/nvme/nvme_ns.o 00:31:58.500 CC lib/nvme/nvme_pcie_common.o 00:31:58.500 CC lib/nvme/nvme_qpair.o 00:31:58.500 CC lib/nvme/nvme_pcie.o 00:31:58.500 CC lib/nvme/nvme.o 00:31:59.065 LIB libspdk_thread.a 00:31:59.065 CC 
lib/nvme/nvme_quirks.o 00:31:59.065 CC lib/nvme/nvme_transport.o 00:31:59.065 CC lib/nvme/nvme_discovery.o 00:31:59.065 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:31:59.065 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:31:59.065 CC lib/nvme/nvme_tcp.o 00:31:59.065 CC lib/nvme/nvme_opal.o 00:31:59.065 CC lib/nvme/nvme_io_msg.o 00:31:59.323 CC lib/nvme/nvme_poll_group.o 00:31:59.323 CC lib/accel/accel.o 00:31:59.323 CC lib/nvme/nvme_zns.o 00:31:59.581 CC lib/nvme/nvme_cuse.o 00:31:59.581 CC lib/blob/blobstore.o 00:31:59.581 CC lib/blob/request.o 00:31:59.581 CC lib/init/json_config.o 00:31:59.581 CC lib/virtio/virtio.o 00:31:59.581 CC lib/virtio/virtio_vhost_user.o 00:31:59.839 CC lib/virtio/virtio_vfio_user.o 00:31:59.839 CC lib/virtio/virtio_pci.o 00:31:59.839 CC lib/init/subsystem.o 00:31:59.839 CC lib/init/subsystem_rpc.o 00:31:59.839 CC lib/accel/accel_rpc.o 00:31:59.839 CC lib/accel/accel_sw.o 00:31:59.839 CC lib/nvme/nvme_vfio_user.o 00:31:59.839 CC lib/nvme/nvme_rdma.o 00:31:59.839 CC lib/blob/zeroes.o 00:31:59.839 LIB libspdk_virtio.a 00:31:59.839 CC lib/blob/blob_bs_dev.o 00:31:59.839 CC lib/init/rpc.o 00:32:00.097 LIB libspdk_accel.a 00:32:00.097 LIB libspdk_init.a 00:32:00.097 CC lib/bdev/bdev_rpc.o 00:32:00.097 CC lib/bdev/bdev.o 00:32:00.097 CC lib/bdev/part.o 00:32:00.097 CC lib/bdev/bdev_zone.o 00:32:00.097 CC lib/bdev/scsi_nvme.o 00:32:00.097 CC lib/event/app.o 00:32:00.097 CC lib/event/reactor.o 00:32:00.356 CC lib/event/log_rpc.o 00:32:00.356 CC lib/event/app_rpc.o 00:32:00.356 CC lib/event/scheduler_static.o 00:32:00.614 LIB libspdk_event.a 00:32:00.614 LIB libspdk_nvme.a 00:32:00.871 LIB libspdk_blob.a 00:32:00.871 CC lib/lvol/lvol.o 00:32:00.871 CC lib/blobfs/blobfs.o 00:32:00.871 CC lib/blobfs/tree.o 00:32:01.475 LIB libspdk_blobfs.a 00:32:01.475 LIB libspdk_lvol.a 00:32:01.475 LIB libspdk_bdev.a 00:32:01.475 CC lib/nvmf/ctrlr.o 00:32:01.475 CC lib/nvmf/ctrlr_discovery.o 00:32:01.475 CC lib/scsi/dev.o 00:32:01.475 CC lib/nvmf/ctrlr_bdev.o 00:32:01.475 CC lib/scsi/lun.o 00:32:01.475 CC lib/nvmf/nvmf.o 00:32:01.475 CC lib/nvmf/subsystem.o 00:32:01.475 CC lib/scsi/port.o 00:32:01.475 CC lib/nbd/nbd.o 00:32:01.475 CC lib/ftl/ftl_core.o 00:32:01.756 CC lib/ftl/ftl_init.o 00:32:01.756 CC lib/ftl/ftl_layout.o 00:32:01.756 CC lib/scsi/scsi.o 00:32:01.756 CC lib/scsi/scsi_bdev.o 00:32:01.756 CC lib/nbd/nbd_rpc.o 00:32:01.756 CC lib/scsi/scsi_pr.o 00:32:01.756 CC lib/ftl/ftl_debug.o 00:32:01.756 CC lib/ftl/ftl_io.o 00:32:01.756 CC lib/ftl/ftl_sb.o 00:32:02.014 CC lib/nvmf/nvmf_rpc.o 00:32:02.014 LIB libspdk_nbd.a 00:32:02.014 CC lib/ftl/ftl_l2p.o 00:32:02.014 CC lib/scsi/scsi_rpc.o 00:32:02.014 CC lib/nvmf/transport.o 00:32:02.014 CC lib/nvmf/tcp.o 00:32:02.014 CC lib/nvmf/rdma.o 00:32:02.014 CC lib/ftl/ftl_l2p_flat.o 00:32:02.014 CC lib/ftl/ftl_nv_cache.o 00:32:02.014 CC lib/scsi/task.o 00:32:02.014 CC lib/ftl/ftl_band.o 00:32:02.014 CC lib/ftl/ftl_band_ops.o 00:32:02.014 CC lib/ftl/ftl_writer.o 00:32:02.272 CC lib/ftl/ftl_rq.o 00:32:02.272 LIB libspdk_scsi.a 00:32:02.272 CC lib/ftl/ftl_reloc.o 00:32:02.272 CC lib/iscsi/conn.o 00:32:02.272 CC lib/ftl/ftl_l2p_cache.o 00:32:02.272 CC lib/vhost/vhost.o 00:32:02.272 CC lib/vhost/vhost_rpc.o 00:32:02.272 CC lib/ftl/ftl_p2l.o 00:32:02.272 CC lib/ftl/mngt/ftl_mngt.o 00:32:02.531 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:32:02.531 CC lib/vhost/vhost_scsi.o 00:32:02.531 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:32:02.531 CC lib/vhost/vhost_blk.o 00:32:02.531 CC lib/ftl/mngt/ftl_mngt_startup.o 00:32:02.531 CC lib/iscsi/init_grp.o 00:32:02.531 CC lib/iscsi/iscsi.o 
00:32:02.531 CC lib/ftl/mngt/ftl_mngt_md.o 00:32:02.789 CC lib/iscsi/md5.o 00:32:02.789 CC lib/ftl/mngt/ftl_mngt_misc.o 00:32:02.789 LIB libspdk_nvmf.a 00:32:02.789 CC lib/iscsi/param.o 00:32:02.789 CC lib/iscsi/portal_grp.o 00:32:02.789 CC lib/iscsi/tgt_node.o 00:32:02.789 CC lib/vhost/rte_vhost_user.o 00:32:02.789 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:32:02.789 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:32:03.047 CC lib/iscsi/iscsi_subsystem.o 00:32:03.048 CC lib/iscsi/iscsi_rpc.o 00:32:03.048 CC lib/ftl/mngt/ftl_mngt_band.o 00:32:03.048 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:32:03.048 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:32:03.048 CC lib/iscsi/task.o 00:32:03.048 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:32:03.305 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:32:03.305 CC lib/ftl/utils/ftl_conf.o 00:32:03.305 CC lib/ftl/utils/ftl_md.o 00:32:03.305 CC lib/ftl/utils/ftl_mempool.o 00:32:03.305 CC lib/ftl/utils/ftl_bitmap.o 00:32:03.305 CC lib/ftl/utils/ftl_property.o 00:32:03.305 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:32:03.305 LIB libspdk_iscsi.a 00:32:03.305 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:32:03.305 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:32:03.305 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:32:03.305 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:32:03.305 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:32:03.305 CC lib/ftl/upgrade/ftl_sb_v3.o 00:32:03.563 CC lib/ftl/upgrade/ftl_sb_v5.o 00:32:03.563 CC lib/ftl/nvc/ftl_nvc_dev.o 00:32:03.563 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:32:03.563 CC lib/ftl/base/ftl_base_dev.o 00:32:03.563 CC lib/ftl/base/ftl_base_bdev.o 00:32:03.563 LIB libspdk_vhost.a 00:32:03.821 LIB libspdk_ftl.a 00:32:03.821 CC module/env_dpdk/env_dpdk_rpc.o 00:32:03.821 CC module/scheduler/gscheduler/gscheduler.o 00:32:03.821 CC module/accel/iaa/accel_iaa.o 00:32:03.821 CC module/sock/posix/posix.o 00:32:03.821 CC module/accel/error/accel_error.o 00:32:03.821 CC module/accel/dsa/accel_dsa.o 00:32:03.821 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:32:03.821 CC module/accel/ioat/accel_ioat.o 00:32:03.821 CC module/scheduler/dynamic/scheduler_dynamic.o 00:32:04.080 CC module/blob/bdev/blob_bdev.o 00:32:04.080 LIB libspdk_env_dpdk_rpc.a 00:32:04.080 CC module/accel/ioat/accel_ioat_rpc.o 00:32:04.080 LIB libspdk_scheduler_gscheduler.a 00:32:04.080 LIB libspdk_scheduler_dpdk_governor.a 00:32:04.080 CC module/accel/iaa/accel_iaa_rpc.o 00:32:04.080 CC module/accel/error/accel_error_rpc.o 00:32:04.080 CC module/accel/dsa/accel_dsa_rpc.o 00:32:04.080 LIB libspdk_scheduler_dynamic.a 00:32:04.080 LIB libspdk_accel_ioat.a 00:32:04.080 LIB libspdk_blob_bdev.a 00:32:04.080 LIB libspdk_accel_iaa.a 00:32:04.080 LIB libspdk_accel_error.a 00:32:04.339 LIB libspdk_accel_dsa.a 00:32:04.339 CC module/bdev/error/vbdev_error.o 00:32:04.339 CC module/bdev/malloc/bdev_malloc.o 00:32:04.339 CC module/bdev/gpt/gpt.o 00:32:04.339 CC module/bdev/delay/vbdev_delay.o 00:32:04.339 CC module/blobfs/bdev/blobfs_bdev.o 00:32:04.339 CC module/bdev/lvol/vbdev_lvol.o 00:32:04.339 CC module/bdev/passthru/vbdev_passthru.o 00:32:04.339 CC module/bdev/nvme/bdev_nvme.o 00:32:04.339 CC module/bdev/null/bdev_null.o 00:32:04.339 LIB libspdk_sock_posix.a 00:32:04.339 CC module/bdev/null/bdev_null_rpc.o 00:32:04.339 CC module/bdev/gpt/vbdev_gpt.o 00:32:04.339 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:32:04.597 CC module/bdev/error/vbdev_error_rpc.o 00:32:04.597 CC module/bdev/delay/vbdev_delay_rpc.o 00:32:04.597 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:32:04.597 LIB libspdk_bdev_null.a 00:32:04.597 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:32:04.597 CC module/bdev/nvme/bdev_nvme_rpc.o 00:32:04.597 CC module/bdev/nvme/nvme_rpc.o 00:32:04.597 LIB libspdk_blobfs_bdev.a 00:32:04.597 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:32:04.597 LIB libspdk_bdev_gpt.a 00:32:04.597 LIB libspdk_bdev_error.a 00:32:04.597 LIB libspdk_bdev_delay.a 00:32:04.597 LIB libspdk_bdev_passthru.a 00:32:04.597 CC module/bdev/nvme/bdev_mdns_client.o 00:32:04.597 LIB libspdk_bdev_malloc.a 00:32:04.597 CC module/bdev/raid/bdev_raid.o 00:32:04.597 CC module/bdev/raid/bdev_raid_rpc.o 00:32:04.597 CC module/bdev/split/vbdev_split.o 00:32:04.597 CC module/bdev/zone_block/vbdev_zone_block.o 00:32:04.871 CC module/bdev/split/vbdev_split_rpc.o 00:32:04.871 CC module/bdev/aio/bdev_aio.o 00:32:04.871 CC module/bdev/aio/bdev_aio_rpc.o 00:32:04.871 LIB libspdk_bdev_lvol.a 00:32:04.871 CC module/bdev/raid/bdev_raid_sb.o 00:32:04.871 CC module/bdev/raid/raid0.o 00:32:04.871 LIB libspdk_bdev_split.a 00:32:04.871 CC module/bdev/raid/raid1.o 00:32:04.871 CC module/bdev/ftl/bdev_ftl.o 00:32:04.871 CC module/bdev/iscsi/bdev_iscsi.o 00:32:04.871 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:32:04.871 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:32:05.156 LIB libspdk_bdev_aio.a 00:32:05.156 CC module/bdev/ftl/bdev_ftl_rpc.o 00:32:05.156 CC module/bdev/raid/concat.o 00:32:05.156 CC module/bdev/raid/raid5f.o 00:32:05.156 CC module/bdev/nvme/vbdev_opal.o 00:32:05.156 CC module/bdev/nvme/vbdev_opal_rpc.o 00:32:05.156 LIB libspdk_bdev_zone_block.a 00:32:05.156 CC module/bdev/virtio/bdev_virtio_scsi.o 00:32:05.156 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:32:05.156 CC module/bdev/virtio/bdev_virtio_blk.o 00:32:05.156 CC module/bdev/virtio/bdev_virtio_rpc.o 00:32:05.156 LIB libspdk_bdev_ftl.a 00:32:05.156 LIB libspdk_bdev_iscsi.a 00:32:05.414 LIB libspdk_bdev_nvme.a 00:32:05.414 LIB libspdk_bdev_raid.a 00:32:05.414 LIB libspdk_bdev_virtio.a 00:32:05.672 CC module/event/subsystems/sock/sock.o 00:32:05.672 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:32:05.672 CC module/event/subsystems/vmd/vmd.o 00:32:05.672 CC module/event/subsystems/iobuf/iobuf.o 00:32:05.672 CC module/event/subsystems/scheduler/scheduler.o 00:32:05.672 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:32:05.672 CC module/event/subsystems/vmd/vmd_rpc.o 00:32:05.672 LIB libspdk_event_scheduler.a 00:32:05.672 LIB libspdk_event_vhost_blk.a 00:32:05.931 LIB libspdk_event_sock.a 00:32:05.931 LIB libspdk_event_vmd.a 00:32:05.931 LIB libspdk_event_iobuf.a 00:32:05.931 CC module/event/subsystems/accel/accel.o 00:32:06.189 LIB libspdk_event_accel.a 00:32:06.189 CC module/event/subsystems/bdev/bdev.o 00:32:06.447 LIB libspdk_event_bdev.a 00:32:06.447 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:32:06.447 CC module/event/subsystems/scsi/scsi.o 00:32:06.447 CC module/event/subsystems/nbd/nbd.o 00:32:06.447 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:32:06.705 LIB libspdk_event_nbd.a 00:32:06.705 LIB libspdk_event_scsi.a 00:32:06.705 LIB libspdk_event_nvmf.a 00:32:06.705 CC module/event/subsystems/iscsi/iscsi.o 00:32:06.705 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:32:06.963 LIB libspdk_event_vhost_scsi.a 00:32:06.963 LIB libspdk_event_iscsi.a 00:32:06.963 CXX app/trace/trace.o 00:32:07.230 CC app/trace_record/trace_record.o 00:32:07.230 CC app/spdk_lspci/spdk_lspci.o 00:32:07.230 CC app/spdk_nvme_identify/identify.o 00:32:07.230 CC app/spdk_nvme_perf/perf.o 00:32:07.230 CC app/iscsi_tgt/iscsi_tgt.o 00:32:07.230 CC app/nvmf_tgt/nvmf_main.o 00:32:07.230 CC 
examples/accel/perf/accel_perf.o 00:32:07.230 CC app/spdk_tgt/spdk_tgt.o 00:32:07.230 CC test/accel/dif/dif.o 00:32:07.230 LINK spdk_lspci 00:32:07.230 LINK spdk_trace_record 00:32:07.230 LINK nvmf_tgt 00:32:07.488 LINK iscsi_tgt 00:32:07.489 LINK spdk_tgt 00:32:07.489 LINK spdk_trace 00:32:07.489 LINK accel_perf 00:32:07.489 LINK dif 00:32:07.489 LINK spdk_nvme_identify 00:32:07.747 LINK spdk_nvme_perf 00:32:11.930 CC test/app/bdev_svc/bdev_svc.o 00:32:12.496 LINK bdev_svc 00:32:20.646 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:32:22.547 LINK nvme_fuzz 00:32:30.658 CC examples/bdev/hello_world/hello_bdev.o 00:32:30.658 LINK hello_bdev 00:32:32.033 CC examples/bdev/bdevperf/bdevperf.o 00:32:35.353 LINK bdevperf 00:32:40.617 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:32:43.165 CC app/spdk_nvme_discover/discovery_aer.o 00:32:44.098 LINK spdk_nvme_discover 00:32:46.029 LINK iscsi_fuzz 00:33:12.568 CC examples/blob/hello_world/hello_blob.o 00:33:12.568 LINK hello_blob 00:33:30.646 CC examples/blob/cli/blobcli.o 00:33:30.646 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:33:31.579 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:33:31.579 LINK blobcli 00:33:34.140 LINK vhost_fuzz 00:34:06.210 CC test/app/histogram_perf/histogram_perf.o 00:34:06.777 LINK histogram_perf 00:34:28.787 CC test/app/jsoncat/jsoncat.o 00:34:28.787 LINK jsoncat 00:34:28.787 CC test/app/stub/stub.o 00:34:28.787 LINK stub 00:34:40.985 CC examples/ioat/perf/perf.o 00:34:41.916 LINK ioat_perf 00:34:44.440 CC app/spdk_top/spdk_top.o 00:34:44.698 CC app/vhost/vhost.o 00:34:46.074 LINK vhost 00:34:47.004 LINK spdk_top 00:34:51.222 CC examples/ioat/verify/verify.o 00:34:51.788 LINK verify 00:34:55.976 CC examples/nvme/hello_world/hello_world.o 00:34:55.976 CC examples/nvme/reconnect/reconnect.o 00:34:55.976 LINK hello_world 00:34:56.542 LINK reconnect 00:34:58.444 CC examples/nvme/nvme_manage/nvme_manage.o 00:34:59.011 CC examples/nvme/arbitration/arbitration.o 00:35:00.385 CC examples/nvme/hotplug/hotplug.o 00:35:00.385 LINK nvme_manage 00:35:00.385 LINK arbitration 00:35:01.327 LINK hotplug 00:35:02.261 CC examples/nvme/cmb_copy/cmb_copy.o 00:35:03.207 CC test/bdev/bdevio/bdevio.o 00:35:03.465 LINK cmb_copy 00:35:04.843 LINK bdevio 00:35:31.410 CC app/spdk_dd/spdk_dd.o 00:35:31.410 LINK spdk_dd 00:35:31.410 CC app/fio/nvme/fio_plugin.o 00:35:31.410 LINK spdk_nvme 00:35:31.410 CC app/fio/bdev/fio_plugin.o 00:35:32.786 LINK spdk_bdev 00:35:34.162 CC examples/nvme/abort/abort.o 00:35:34.162 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:35:34.729 LINK pmr_persistence 00:35:34.729 LINK abort 00:35:35.296 TEST_HEADER include/spdk/config.h 00:35:35.296 CXX test/cpp_headers/accel.o 00:35:35.296 CC test/blobfs/mkfs/mkfs.o 00:35:35.581 CC examples/sock/hello_world/hello_sock.o 00:35:35.839 CXX test/cpp_headers/accel_module.o 00:35:36.098 LINK mkfs 00:35:36.357 LINK hello_sock 00:35:36.615 CXX test/cpp_headers/assert.o 00:35:37.185 CXX test/cpp_headers/barrier.o 00:35:38.120 CXX test/cpp_headers/base64.o 00:35:39.056 CXX test/cpp_headers/bdev.o 00:35:40.432 CXX test/cpp_headers/bdev_module.o 00:35:42.333 CXX test/cpp_headers/bdev_zone.o 00:35:43.268 CXX test/cpp_headers/bit_array.o 00:35:44.643 CXX test/cpp_headers/bit_pool.o 00:35:46.018 CXX test/cpp_headers/blob.o 00:35:47.400 CXX test/cpp_headers/blob_bdev.o 00:35:48.777 CXX test/cpp_headers/blobfs.o 00:35:50.676 CXX test/cpp_headers/blobfs_bdev.o 00:35:52.067 CXX test/cpp_headers/conf.o 00:35:53.466 CXX test/cpp_headers/config.o 00:35:53.466 CXX test/cpp_headers/cpuset.o 
00:35:54.862 CXX test/cpp_headers/crc16.o 00:35:56.240 CXX test/cpp_headers/crc32.o 00:35:57.611 CXX test/cpp_headers/crc64.o 00:35:58.986 CXX test/cpp_headers/dif.o 00:36:00.361 CC examples/vmd/lsvmd/lsvmd.o 00:36:00.620 CXX test/cpp_headers/dma.o 00:36:01.241 LINK lsvmd 00:36:01.807 CXX test/cpp_headers/endian.o 00:36:03.182 CXX test/cpp_headers/env.o 00:36:04.558 CXX test/cpp_headers/env_dpdk.o 00:36:05.519 CXX test/cpp_headers/event.o 00:36:06.460 CXX test/cpp_headers/fd.o 00:36:07.835 CXX test/cpp_headers/fd_group.o 00:36:09.213 CXX test/cpp_headers/file.o 00:36:10.150 CXX test/cpp_headers/ftl.o 00:36:11.085 CXX test/cpp_headers/gpt_spec.o 00:36:12.460 CXX test/cpp_headers/hexlify.o 00:36:12.718 CXX test/cpp_headers/histogram_data.o 00:36:12.982 CXX test/cpp_headers/idxd.o 00:36:13.915 CXX test/cpp_headers/idxd_spec.o 00:36:14.173 CC examples/nvmf/nvmf/nvmf.o 00:36:14.738 CC examples/util/zipf/zipf.o 00:36:14.996 CXX test/cpp_headers/init.o 00:36:15.562 LINK zipf 00:36:15.819 CXX test/cpp_headers/ioat.o 00:36:15.819 LINK nvmf 00:36:16.752 CXX test/cpp_headers/ioat_spec.o 00:36:16.753 CC test/dma/test_dma/test_dma.o 00:36:18.126 CXX test/cpp_headers/iscsi_spec.o 00:36:18.691 LINK test_dma 00:36:18.949 CXX test/cpp_headers/json.o 00:36:19.883 CXX test/cpp_headers/jsonrpc.o 00:36:21.278 CXX test/cpp_headers/likely.o 00:36:22.214 CXX test/cpp_headers/log.o 00:36:22.214 CC examples/thread/thread/thread_ex.o 00:36:23.150 CXX test/cpp_headers/lvol.o 00:36:23.716 LINK thread 00:36:23.975 CXX test/cpp_headers/memory.o 00:36:24.910 CXX test/cpp_headers/mmio.o 00:36:25.844 CXX test/cpp_headers/nbd.o 00:36:25.844 CXX test/cpp_headers/notify.o 00:36:26.779 CXX test/cpp_headers/nvme.o 00:36:27.762 CXX test/cpp_headers/nvme_intel.o 00:36:28.329 CC examples/vmd/led/led.o 00:36:28.895 CXX test/cpp_headers/nvme_ocssd.o 00:36:29.153 LINK led 00:36:29.718 CXX test/cpp_headers/nvme_ocssd_spec.o 00:36:30.648 CXX test/cpp_headers/nvme_spec.o 00:36:31.579 CXX test/cpp_headers/nvme_zns.o 00:36:32.949 CXX test/cpp_headers/nvmf.o 00:36:34.324 CXX test/cpp_headers/nvmf_cmd.o 00:36:35.698 CXX test/cpp_headers/nvmf_fc_spec.o 00:36:37.071 CXX test/cpp_headers/nvmf_spec.o 00:36:38.991 CXX test/cpp_headers/nvmf_transport.o 00:36:40.890 CXX test/cpp_headers/opal.o 00:36:42.266 CXX test/cpp_headers/opal_spec.o 00:36:43.648 CXX test/cpp_headers/pci_ids.o 00:36:45.023 CXX test/cpp_headers/pipe.o 00:36:46.399 CXX test/cpp_headers/queue.o 00:36:46.657 CXX test/cpp_headers/reduce.o 00:36:46.917 CXX test/cpp_headers/rpc.o 00:36:48.328 CXX test/cpp_headers/scheduler.o 00:36:48.894 CC test/env/mem_callbacks/mem_callbacks.o 00:36:49.829 CXX test/cpp_headers/scsi.o 00:36:50.397 LINK mem_callbacks 00:36:51.774 CXX test/cpp_headers/scsi_spec.o 00:36:53.150 CXX test/cpp_headers/sock.o 00:36:54.524 CXX test/cpp_headers/stdinc.o 00:36:55.540 CXX test/cpp_headers/string.o 00:36:56.476 CC test/env/vtophys/vtophys.o 00:36:56.476 CXX test/cpp_headers/thread.o 00:36:57.408 LINK vtophys 00:36:57.666 CXX test/cpp_headers/trace.o 00:36:59.116 CXX test/cpp_headers/trace_parser.o 00:37:00.049 CXX test/cpp_headers/tree.o 00:37:00.307 CXX test/cpp_headers/ublk.o 00:37:01.239 CXX test/cpp_headers/util.o 00:37:01.496 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:37:02.063 CXX test/cpp_headers/uuid.o 00:37:02.381 LINK env_dpdk_post_init 00:37:03.315 CXX test/cpp_headers/version.o 00:37:03.315 CXX test/cpp_headers/vfio_user_pci.o 00:37:04.690 CXX test/cpp_headers/vfio_user_spec.o 00:37:05.257 CC test/env/memory/memory_ut.o 00:37:05.515 CXX 
test/cpp_headers/vhost.o 00:37:06.454 CXX test/cpp_headers/vmd.o 00:37:07.825 CXX test/cpp_headers/xor.o 00:37:07.825 LINK memory_ut 00:37:08.759 CXX test/cpp_headers/zipf.o 00:37:10.724 CC test/event/event_perf/event_perf.o 00:37:11.656 LINK event_perf 00:37:12.587 CC test/event/reactor/reactor.o 00:37:13.533 LINK reactor 00:37:17.729 CC test/event/reactor_perf/reactor_perf.o 00:37:19.127 LINK reactor_perf 00:37:31.324 CC test/event/app_repeat/app_repeat.o 00:37:32.259 LINK app_repeat 00:37:32.840 CC test/env/pci/pci_ut.o 00:37:34.236 LINK pci_ut 00:37:34.236 CC test/event/scheduler/scheduler.o 00:37:34.802 LINK scheduler 00:37:35.060 CC test/lvol/esnap/esnap.o 00:37:35.992 CC test/rpc_client/rpc_client_test.o 00:37:35.992 CC test/nvme/aer/aer.o 00:37:36.558 LINK rpc_client_test 00:37:37.124 LINK aer 00:37:39.694 CC test/thread/poller_perf/poller_perf.o 00:37:39.951 LINK poller_perf 00:37:41.852 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:37:42.418 LINK histogram_ut 00:37:47.678 LINK esnap 00:37:47.678 CC test/unit/lib/accel/accel.c/accel_ut.o 00:37:53.043 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:37:56.322 LINK accel_ut 00:38:00.519 CC test/thread/lock/spdk_lock.o 00:38:04.705 LINK spdk_lock 00:38:06.604 CC test/unit/lib/bdev/part.c/part_ut.o 00:38:09.140 LINK bdev_ut 00:38:10.581 CC test/nvme/reset/reset.o 00:38:11.518 LINK reset 00:38:11.518 CC test/nvme/sgl/sgl.o 00:38:12.890 LINK sgl 00:38:13.840 LINK part_ut 00:38:17.128 CC test/nvme/e2edp/nvme_dp.o 00:38:17.386 CC test/nvme/overhead/overhead.o 00:38:17.951 LINK nvme_dp 00:38:18.515 CC test/nvme/err_injection/err_injection.o 00:38:19.080 LINK overhead 00:38:19.645 LINK err_injection 00:38:20.210 CC test/nvme/startup/startup.o 00:38:20.775 LINK startup 00:38:26.034 CC test/nvme/reserve/reserve.o 00:38:27.403 LINK reserve 00:38:29.931 CC examples/idxd/perf/perf.o 00:38:31.830 LINK idxd_perf 00:38:46.704 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:38:46.704 CC examples/interrupt_tgt/interrupt_tgt.o 00:38:46.704 LINK interrupt_tgt 00:38:46.704 LINK scsi_nvme_ut 00:38:47.270 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:38:47.837 CC test/nvme/simple_copy/simple_copy.o 00:38:47.837 LINK gpt_ut 00:38:47.837 CC test/nvme/connect_stress/connect_stress.o 00:38:48.095 LINK connect_stress 00:38:48.353 LINK simple_copy 00:38:48.353 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:38:48.920 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:38:48.920 CC test/nvme/boot_partition/boot_partition.o 00:38:48.920 LINK boot_partition 00:38:49.486 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:38:49.744 LINK vbdev_lvol_ut 00:38:51.118 LINK blob_bdev_ut 00:38:51.377 CC test/unit/lib/blob/blob.c/blob_ut.o 00:38:54.660 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:38:55.595 LINK tree_ut 00:38:55.595 LINK bdev_ut 00:39:02.156 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:39:02.156 CC test/unit/lib/dma/dma.c/dma_ut.o 00:39:02.156 CC test/nvme/compliance/nvme_compliance.o 00:39:03.613 LINK dma_ut 00:39:03.871 LINK nvme_compliance 00:39:05.248 LINK blobfs_async_ut 00:39:08.533 LINK blob_ut 00:39:08.791 CC test/unit/lib/event/app.c/app_ut.o 00:39:10.692 LINK app_ut 00:39:11.628 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:39:12.564 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:39:13.994 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:39:13.994 LINK bdev_raid_sb_ut 00:39:15.367 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:39:15.368 LINK concat_ut 00:39:15.626 LINK bdev_raid_ut 
00:39:16.559 LINK raid1_ut 00:39:17.126 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:39:18.501 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:39:18.501 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:39:19.081 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:39:19.340 LINK reactor_ut 00:39:19.598 LINK ioat_ut 00:39:20.164 LINK blobfs_bdev_ut 00:39:21.539 LINK blobfs_sync_ut 00:39:22.914 CC test/nvme/fused_ordering/fused_ordering.o 00:39:22.914 CC test/nvme/doorbell_aers/doorbell_aers.o 00:39:23.481 CC test/nvme/fdp/fdp.o 00:39:23.481 LINK fused_ordering 00:39:23.739 LINK doorbell_aers 00:39:24.306 LINK fdp 00:39:24.565 CC test/nvme/cuse/cuse.o 00:39:25.131 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:39:25.697 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:39:25.956 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:39:26.892 LINK cuse 00:39:26.892 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:39:27.150 LINK conn_ut 00:39:27.150 LINK init_grp_ut 00:39:28.085 LINK bdev_zone_ut 00:39:28.085 LINK raid5f_ut 00:39:33.349 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:39:33.349 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:39:33.349 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:39:33.349 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:39:33.641 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:39:33.641 LINK json_util_ut 00:39:33.900 LINK jsonrpc_server_ut 00:39:34.835 LINK vbdev_zone_block_ut 00:39:35.769 LINK json_parse_ut 00:39:36.704 CC test/unit/lib/iscsi/param.c/param_ut.o 00:39:36.961 CC test/unit/lib/log/log.c/log_ut.o 00:39:37.533 LINK iscsi_ut 00:39:38.100 LINK log_ut 00:39:38.100 LINK param_ut 00:39:41.384 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:39:41.384 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:39:41.642 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:39:41.900 LINK portal_grp_ut 00:39:41.900 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:39:41.900 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:39:42.158 LINK tgt_node_ut 00:39:42.725 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:39:42.725 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:39:42.983 LINK json_write_ut 00:39:42.983 LINK lvol_ut 00:39:42.983 CC test/unit/lib/notify/notify.c/notify_ut.o 00:39:43.241 LINK notify_ut 00:39:44.194 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:39:44.194 LINK nvme_ut 00:39:44.451 LINK bdev_nvme_ut 00:39:44.709 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:39:44.967 LINK tcp_ut 00:39:44.967 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:39:45.225 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:39:45.483 LINK ctrlr_ut 00:39:45.483 LINK dev_ut 00:39:46.049 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:39:46.307 LINK subsystem_ut 00:39:47.240 LINK ctrlr_bdev_ut 00:39:47.240 LINK ctrlr_discovery_ut 00:39:48.616 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:39:49.261 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:39:50.637 LINK nvmf_ut 00:39:53.167 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:39:53.424 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:39:53.424 LINK lun_ut 00:39:53.424 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:39:53.682 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:39:53.682 LINK nvme_ctrlr_ut 00:39:53.682 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:39:53.682 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:39:53.959 CC test/unit/lib/sock/sock.c/sock_ut.o 00:39:54.247 LINK nvme_ctrlr_ocssd_cmd_ut 00:39:54.505 CC 
test/unit/lib/sock/posix.c/posix_ut.o 00:39:54.505 LINK nvme_ctrlr_cmd_ut 00:39:54.762 LINK nvme_ns_ut 00:39:54.762 LINK rdma_ut 00:39:55.019 LINK sock_ut 00:39:55.596 LINK transport_ut 00:39:55.596 LINK posix_ut 00:39:56.530 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:39:56.530 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:39:57.096 LINK scsi_ut 00:39:57.662 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:39:58.228 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:39:58.794 LINK scsi_pr_ut 00:39:59.052 LINK scsi_bdev_ut 00:39:59.311 LINK nvme_ns_cmd_ut 00:39:59.311 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:39:59.311 CC test/unit/lib/thread/thread.c/thread_ut.o 00:39:59.568 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:40:00.134 LINK iobuf_ut 00:40:00.701 CC test/unit/lib/util/base64.c/base64_ut.o 00:40:00.701 LINK thread_ut 00:40:00.959 LINK nvme_ns_ocssd_cmd_ut 00:40:00.959 LINK base64_ut 00:40:01.894 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:40:02.152 LINK bit_array_ut 00:40:02.410 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:40:02.410 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:40:02.669 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:40:02.669 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:40:02.669 LINK crc16_ut 00:40:02.669 LINK cpuset_ut 00:40:02.927 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:40:02.927 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:40:02.927 LINK crc32_ieee_ut 00:40:02.927 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:40:03.185 LINK crc32c_ut 00:40:03.185 CC test/unit/lib/util/dif.c/dif_ut.o 00:40:03.185 LINK crc64_ut 00:40:03.185 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:40:03.185 LINK nvme_poll_group_ut 00:40:03.444 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:40:03.444 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:40:03.444 CC test/unit/lib/util/iov.c/iov_ut.o 00:40:03.703 CC test/unit/lib/util/math.c/math_ut.o 00:40:03.703 LINK nvme_pcie_ut 00:40:03.703 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:40:03.703 LINK math_ut 00:40:03.703 LINK iov_ut 00:40:03.703 LINK dif_ut 00:40:03.962 LINK nvme_quirks_ut 00:40:03.962 LINK pipe_ut 00:40:03.962 LINK nvme_qpair_ut 00:40:04.528 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:40:05.095 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:40:05.354 LINK nvme_tcp_ut 00:40:05.354 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:40:05.354 LINK nvme_transport_ut 00:40:05.354 CC test/unit/lib/util/string.c/string_ut.o 00:40:05.613 CC test/unit/lib/util/xor.c/xor_ut.o 00:40:05.613 LINK string_ut 00:40:05.872 LINK nvme_io_msg_ut 00:40:06.130 LINK xor_ut 00:40:06.698 LINK nvme_pcie_common_ut 00:40:06.698 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:40:06.698 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:40:06.956 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:40:07.214 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:40:07.214 LINK nvme_opal_ut 00:40:07.472 LINK nvme_fabric_ut 00:40:07.472 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:40:07.472 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:40:07.731 LINK pci_event_ut 00:40:07.731 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:40:07.731 LINK subsystem_ut 00:40:07.989 LINK rpc_ut 00:40:07.989 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:40:07.989 LINK nvme_rdma_ut 00:40:08.247 LINK nvme_cuse_ut 00:40:08.247 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:40:08.247 LINK idxd_user_ut 00:40:08.505 LINK idxd_ut 
00:40:08.763 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:40:09.330 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:40:09.330 CC test/unit/lib/rdma/common.c/common_ut.o 00:40:09.330 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:40:09.588 LINK common_ut 00:40:09.588 LINK ftl_l2p_ut 00:40:09.847 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:40:09.847 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:40:09.847 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:40:09.847 LINK ftl_band_ut 00:40:09.847 LINK vhost_ut 00:40:10.105 LINK ftl_bitmap_ut 00:40:10.105 LINK ftl_mempool_ut 00:40:10.105 LINK ftl_io_ut 00:40:10.363 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:40:10.363 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:40:10.363 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:40:10.931 LINK ftl_mngt_ut 00:40:10.931 LINK ftl_layout_upgrade_ut 00:40:11.867 LINK ftl_sb_ut 00:41:08.082 json_parse_ut.c: In function ‘test_parse_nesting’: 00:41:08.082 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without 00:41:08.082 616 | test_parse_nesting(void) 00:41:08.082 | ^ 00:41:08.082 11:06:34 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:41:08.339 make[1]: Nothing to be done for 'clean'. 00:41:12.525 11:06:38 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:41:12.525 11:06:38 -- common/autotest_common.sh@718 -- $ xtrace_disable 00:41:12.525 11:06:38 -- common/autotest_common.sh@10 -- $ set +x 00:41:12.525 11:06:38 -- spdk/autopackage.sh@48 -- $ timing_finish 00:41:12.525 11:06:38 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:41:12.525 11:06:38 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:41:12.525 11:06:38 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:41:12.525 + [[ -n 2272 ]] 00:41:12.525 + sudo kill 2272 00:41:12.532 [Pipeline] } 00:41:12.544 [Pipeline] // timeout 00:41:12.549 [Pipeline] } 00:41:12.561 [Pipeline] // stage 00:41:12.565 [Pipeline] } 00:41:12.577 [Pipeline] // catchError 00:41:12.583 [Pipeline] stage 00:41:12.584 [Pipeline] { (Stop VM) 00:41:12.595 [Pipeline] sh 00:41:12.870 + vagrant halt 00:41:16.186 ==> default: Halting domain... 00:41:26.184 [Pipeline] sh 00:41:26.545 + vagrant destroy -f 00:41:29.829 ==> default: Removing domain... 00:41:30.407 [Pipeline] sh 00:41:30.686 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest_2/output 00:41:30.695 [Pipeline] } 00:41:30.714 [Pipeline] // stage 00:41:30.720 [Pipeline] } 00:41:30.739 [Pipeline] // dir 00:41:30.744 [Pipeline] } 00:41:30.760 [Pipeline] // wrap 00:41:30.766 [Pipeline] } 00:41:30.781 [Pipeline] // catchError 00:41:30.791 [Pipeline] stage 00:41:30.793 [Pipeline] { (Epilogue) 00:41:30.807 [Pipeline] sh 00:41:31.097 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:41:53.073 [Pipeline] catchError 00:41:53.075 [Pipeline] { 00:41:53.091 [Pipeline] sh 00:41:53.371 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:41:53.631 Artifacts sizes are good 00:41:53.640 [Pipeline] } 00:41:53.659 [Pipeline] // catchError 00:41:53.671 [Pipeline] archiveArtifacts 00:41:53.679 Archiving artifacts 00:41:54.070 [Pipeline] cleanWs 00:41:54.084 [WS-CLEANUP] Deleting project workspace... 00:41:54.084 [WS-CLEANUP] Deferred wipeout is used... 
00:41:54.090 [WS-CLEANUP] done 00:41:54.092 [Pipeline] } 00:41:54.110 [Pipeline] // stage 00:41:54.117 [Pipeline] } 00:41:54.135 [Pipeline] // node 00:41:54.142 [Pipeline] End of Pipeline 00:41:54.180 Finished: SUCCESS